Circuit netlists are naturally represented as directed graphs, making them ideal candidates for directed graph representation learning (DGRL) to analyze and predict circuit properties. Encoders built on DGRL offer fast, cost-effective alternatives to traditional simulation and enhance downstream EDA workflows. However, reliable prediction on directed circuit graphs remains challenging: task-specific heuristics often limit generalization across diverse design problems, while naïve directed message-passing neural networks (MPNNs) struggle to capture absolute and relative node positions as well as long-range dependencies. To address these challenges, we first introduce general circuit graph encoder architectures with enhanced expressiveness for capturing long-range directional and logical dependencies. Our models use graph isomorphism networks (GINs) and graph transformers as backbones, incorporating bidirected message passing and stable positional encodings. Second, to jointly encode the inductive biases of circuit structure and functionality, we adopt a pretraining–finetuning pipeline: encoders are pretrained with a novel, sample-efficient graph contrastive learning framework on unlabeled circuit data, augmented with hard negatives generated through functional and topological perturbations, and are then finetuned with lightweight task-specific heads. This combination of more expressive graph encoders and sample-efficient graph contrastive learning substantially enhances representational capacity, yielding general-purpose directed circuit graph encoders that can be applied across a broad range of design tasks. Evaluation on symbolic reasoning and quality-of-results (QoR) prediction tasks demonstrates consistent improvements over task-specific baselines. Our framework, Thor, is available at: https://github.com/ORCA-lab/Thor.
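To make the encoder design concrete, the minimal sketch below illustrates one way to realize GIN-style bidirected message passing over a directed netlist graph in plain PyTorch. The class name `BiDirGINLayer`, the tensor layouts, and the two-branch combination scheme are illustrative assumptions for exposition, not the released Thor implementation (see the repository linked above for the actual code).

```python
# Illustrative sketch (not the released Thor code): a GIN-style layer with
# bidirected message passing, aggregating along forward and reversed edges
# separately before combining the two views of each node.
import torch
import torch.nn as nn


class BiDirGINLayer(nn.Module):
    """One GIN-style layer that passes messages along both edge directions."""

    def __init__(self, dim: int):
        super().__init__()
        self.eps_fwd = nn.Parameter(torch.zeros(1))
        self.eps_bwd = nn.Parameter(torch.zeros(1))
        self.mlp_fwd = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.mlp_bwd = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.combine = nn.Linear(2 * dim, dim)

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: [num_nodes, dim]; edge_index: [2, num_edges] with rows (src, dst).
        src, dst = edge_index
        # Sum-aggregate messages along forward edges (src -> dst, fan-in context) ...
        agg_fwd = torch.zeros_like(h).index_add_(0, dst, h[src])
        # ... and along reversed edges (dst -> src, fan-out context).
        agg_bwd = torch.zeros_like(h).index_add_(0, src, h[dst])
        out_fwd = self.mlp_fwd((1 + self.eps_fwd) * h + agg_fwd)
        out_bwd = self.mlp_bwd((1 + self.eps_bwd) * h + agg_bwd)
        return torch.relu(self.combine(torch.cat([out_fwd, out_bwd], dim=-1)))


# Minimal usage on a toy 4-node chain standing in for a netlist fragment.
if __name__ == "__main__":
    h = torch.randn(4, 16)                             # node embeddings
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # directed edges 0->1->2->3
    layer = BiDirGINLayer(16)
    print(layer(h, edge_index).shape)                  # torch.Size([4, 16])
```

Keeping separate learnable epsilons and MLPs per direction is one simple way to let the encoder distinguish fan-in from fan-out structure; positional encodings would be added to `h` before stacking such layers or feeding a transformer backbone.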