Surface acoustic wave (SAW) sensors with increasingly intricate and refined patterns are commonly fabricated using lithographic processes. Emerging applications of SAW sensors often require novel materials whose fabrication outcomes are not yet well characterized. Because SAW sensor performance is strongly correlated with the absence of post-fabrication defects, effective means of detecting defects within the SAW sensor are critical. However, the precise identification and classification of surface features needed for confidence in model accuracy typically requires labor-intensive manual labeling. One approach to automating defect detection is to apply machine-learning techniques to analyze and quantify defects within the SAW sensor. In this paper, we propose a machine-learning approach that uses a deep convolutional autoencoder to semantically segment surface features. The proposed deep image autoencoder takes a grayscale input image and generates a color image in which the defect region is segmented in red, the metallic interdigital transducer (IDT) fingers in green, and the substrate region in blue. Experimental results demonstrate promising segmentation scores in locating defects and regions of interest for a novel SAW sensor variant. The proposed method automates the pixel-level localization and measurement of post-fabrication defects that may be missed by error-prone visual inspection.
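The input/output mapping described above can be illustrated with a minimal sketch. This is not the paper's actual architecture (layer counts, channel widths, and the `SegAutoencoder` name are illustrative assumptions); it only shows the general shape of a convolutional autoencoder that maps a one-channel grayscale image to a three-channel map, with one channel per class (defect, IDT finger, substrate), assuming PyTorch is available.

```python
# Illustrative sketch only: a convolutional autoencoder mapping a 1-channel
# grayscale image to a 3-channel per-pixel class map (red = defect,
# green = IDT fingers, blue = substrate). Layer sizes are assumptions,
# not the architecture from the paper.
import torch
import torch.nn as nn

class SegAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: two strided convolutions downsample 64x64 -> 16x16.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: transposed convolutions upsample back to input resolution,
        # ending in 3 output channels (one per region class).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
        )

    def forward(self, x):
        # Softmax over the channel dimension yields per-pixel class
        # probabilities, which can be rendered directly as an RGB image.
        return torch.softmax(self.decoder(self.encoder(x)), dim=1)

model = SegAutoencoder()
x = torch.randn(1, 1, 64, 64)  # one 64x64 grayscale image
y = model(x)
print(tuple(y.shape))          # (1, 3, 64, 64)
```

Coloring each pixel by its highest-probability channel then produces the red/green/blue segmentation map described above.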