The high survival rate of breast cancer has led to increased interest in post-treatment quality of life, particularly regarding the aesthetic outcome. Currently used aesthetic assessment methods are subjective, which makes reproducibility and impartiality impossible. To create an objective method capable of becoming the gold standard, it is fundamental to detect, in a completely automatic manner, keypoints in photographs of women's torsos after breast cancer surgery. This paper proposes a deep model and a hybrid model to detect these keypoints with high accuracy. Our methods are evaluated on two datasets: one composed of images with a clean and consistent background, and a second containing photographs taken under poor lighting and background conditions. On both datasets, the proposed methods improve on the current state of the art in the detection of endpoints, nipples, and breast contour in terms of average error distance.