Tensors are fundamental for representing high-dimensional data across many application domains, including machine learning and quantum many-body simulation. Computations on such high-dimensional data are costly in terms of memory, power, and computation time. Tensor decomposition can reduce the dimensionality of these tensors by representing them as a sum or product of several lower-dimensional tensors. In this paper, we present an automated framework that applies tensor decomposition to a series of operations within an application. Specifically, the framework automatically determines a beneficial tensor decomposition under given resource constraints. Experimental results demonstrate that our framework drastically reduces memory requirements while achieving accuracy comparable to state-of-the-art methods.
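To make the memory argument concrete, the following is a minimal NumPy sketch (illustrative only, not the paper's framework) assuming a CP-style sum-of-rank-one decomposition; the tensor shape and rank are hypothetical.

    import numpy as np

    # Hypothetical sizes: a 50 x 50 x 50 tensor approximated by R = 4 rank-one terms.
    I, J, K, R = 50, 50, 50, 4
    A = np.random.rand(I, R)  # factor matrices of the decomposition
    B = np.random.rand(J, R)
    C = np.random.rand(K, R)

    # Reconstruct the full tensor as a sum over R outer products.
    T = np.einsum('ir,jr,kr->ijk', A, B, C)

    full_entries = T.size                         # 125,000 stored values
    factored_entries = A.size + B.size + C.size   # 600 stored values
    print(full_entries, factored_entries)         # storage shrinks by roughly 200x

Storing only the factor matrices instead of the full tensor is what drives the memory reduction; the trade-off is the approximation error introduced when the chosen rank is smaller than the tensor's true rank.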