Near-threshold computing has emerged as a promising solution for drastically improving the energy efficiency of CMOS circuits. This paper proposes a linear delay model for optimal gate sizing of near-threshold circuits. To the best of our knowledge, this is the first analytical model that accounts not only for the delay variation of gates but also for the correlation between the slew rates of adjacent gates. Using this model, we present an analytical approach to minimizing buffer delay. Based on this approach, we develop a gate sizing methodology that achieves both higher energy efficiency and lower delay than a methodology based on the theory of logical effort. Finally, we evaluate the methodology with transistor-level circuit simulations using a commercial 28-nm process technology model. The simulation results show that our gate sizing achieves up to 19% lower delay and 23% lower energy consumption than a methodology based on the theory of logical effort.