Scaling the supply voltage to the near-threshold voltage (NTV) region is an effective approach for energy-constrained circuit design, at the cost of reduced performance. However, circuits operating at NTV are far more sensitive to process and runtime variations, which shift both the performance and the power consumption of the circuit and thereby move the Minimum Energy Point (MEP). Consequently, finding an optimal operating voltage for Near-Threshold Computing (NTC) on a per-chip basis to account for variability is very challenging. In this paper, we propose a machine-learning-based post-fabrication calibration approach in which the optimal supply voltage of each chip is determined during manufacturing test from basic circuit characteristics such as dynamic and leakage power. Unlike conventional approaches, which track the MEP at runtime, the proposed technique imposes almost no area or power overhead on the circuit. Simulation results show a significant improvement (59.1%) in energy efficiency compared to state-of-the-art approaches.
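To make the calibration idea concrete, the following is a minimal illustrative sketch, not the authors' actual model: it fits a simple least-squares regression mapping test-time dynamic and leakage power measurements to a per-chip supply voltage. All data, coefficients, and the linear model form are synthetic assumptions introduced here for illustration only.

```python
# Illustrative sketch only: predict a per-chip operating voltage from
# manufacturing-test power measurements. The linear model and the
# synthetic chip population below are assumptions, not measured data.
import random

def fit_linear(X, y):
    """Ordinary least squares for y ~ w0 + w1*x1 + w2*x2 via normal equations."""
    A = [[1.0, x[0], x[1]] for x in X]          # design matrix with bias column
    n = 3
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
           for i in range(n)]
    Aty = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(n)]
    # Solve the 3x3 system with Gauss-Jordan elimination.
    for i in range(n):
        piv = AtA[i][i]
        AtA[i] = [v / piv for v in AtA[i]]
        Aty[i] /= piv
        for r in range(n):
            if r != i:
                f = AtA[r][i]
                AtA[r] = [AtA[r][j] - f * AtA[i][j] for j in range(n)]
                Aty[r] -= f * Aty[i]
    return Aty  # [w0, w1, w2]

# Synthetic chip population: normalized power values plus an assumed
# trend that leakier chips favor a slightly higher supply voltage.
random.seed(0)
X, y = [], []
for _ in range(200):
    p_dyn = random.uniform(0.8, 1.2)    # normalized dynamic power
    p_leak = random.uniform(0.5, 1.5)   # normalized leakage power
    v_opt = 0.45 + 0.05 * p_leak - 0.03 * p_dyn + random.gauss(0, 0.005)
    X.append((p_dyn, p_leak))
    y.append(v_opt)

w = fit_linear(X, y)

def predict_vdd(p_dyn, p_leak):
    """Calibrated supply voltage (V) for a chip's measured power values."""
    return w[0] + w[1] * p_dyn + w[2] * p_leak
```

In practice, the paper's approach would replace the synthetic data with measured per-chip characteristics and a trained model; the sketch only shows the shape of the mapping from cheap test-time observables to a voltage setting, which is why it avoids any runtime MEP-tracking hardware.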