Hanwen Guo [0000]
Research Interests
Programming Languages, Type Systems, Gradual Typing
Or: how do we reduce the friction and improve the ergonomics of adopting strong guarantees?
Education
B.Eng. in Computer Science
Publications
If-T: A Benchmark for Type Narrowing (article) [guo-IfTBenchmarkTypeNarrowing-2025]
Abstract: Context: The design of static type systems that can validate dynamically-typed programs (gradually) is an ongoing challenge. A key difficulty is that dynamic code rarely follows datatype-driven design. Programs instead use runtime tests to narrow down the proper usage of incoming data. Type systems for dynamic languages thus need a type narrowing mechanism that refines the type environment along individual control paths based on dominating tests, a form of flow-sensitive typing. In order to express refinements, the type system must have some notion of sets and subsets. Since set-theoretic types are computationally and ergonomically complex, the need for type narrowing raises design questions about how to balance precision and performance.
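As a concrete illustration of the mechanism this abstract describes (a minimal TypeScript sketch of my own, not an example drawn from the paper), a runtime `typeof` test refines the type of `x` differently along each control path:

```typescript
// Hypothetical illustration: the typeof test dominates each branch, so the
// checker refines `x` from `string | number` to `string` or `number`.
function describe(x: string | number): string {
  if (typeof x === "string") {
    return x.toUpperCase();  // x : string on this path
  }
  return x.toFixed(2);       // x : number on this path
}
```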
Inquiry: To date, the design of type narrowing systems has been driven by intuition, past experience, and examples from users in various language communities. There is no standard that captures desirable and undesirable behaviors. Prior formalizations of narrowing are also significantly more complex than a standard type system, and it is unclear how the extra complexity pays off in terms of concrete examples. This paper addresses the problems through If-T, a language-agnostic design benchmark for type narrowing that characterizes the abilities of implementations using simple programs that draw attention to fundamental questions. Unlike a traditional performance-focused benchmark, If-T measures a narrowing system’s ability to validate correct code and reject incorrect code. Unlike a test suite, systems are not required to fully conform to If-T. Deviations are acceptable provided they are justified by well-reasoned design considerations, such as compile-time performance.
Approach: If-T is guided by the literature on type narrowing, the documentation of gradual languages such as TypeScript, and experiments with typechecker implementations. We have identified a set of core technical dimensions for type narrowing. For each dimension, the benchmark contains a set of topics and (at least) two characterizing programs per topic: one that should typecheck and one that should not typecheck.
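To illustrate what such a characterizing pair might look like (a TypeScript sketch in the spirit of the benchmark, not an actual If-T program): one program narrows away `null` before use and should typecheck, while its counterpart uses the value on the path where it is still `null` and should be rejected.

```typescript
// Hypothetical characterizing pair; not taken from the benchmark itself.

// Should typecheck: the dominating test removes `null` from x's type.
function accepts(x: number | null): number {
  if (x !== null) {
    return x + 1;
  }
  return 0;
}

// Should not typecheck: on this path x is still (only) null.
function rejects(x: number | null): number {
  if (x === null) {
    return x + 1;   // expected error
  }
  return 0;
}
```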
Knowledge: If-T provides a baseline to measure type narrowing systems. For researchers, it provides criteria to categorize future designs via its collection of positive and negative examples. For language designers, the benchmark demonstrates the payoff of typechecker complexity in terms of concrete examples. Designers can use the examples to decide whether supporting a particular example is worthwhile. Both the benchmark and its implementations are freely available online.
Grounding: We have implemented the benchmark for five typecheckers: TypeScript, Flow, Typed Racket, mypy, and Pyright. The results highlight important differences, such as the ability to track logical implications among program variables and typechecking for user-defined narrowing predicates.
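Both of these dimensions can be sketched in a few lines of TypeScript (my own illustrations, not programs from the benchmark): a boolean variable whose truth implies a narrowing of another variable, and a user-defined predicate whose result narrows its argument at call sites.

```typescript
// Hypothetical sketches, not actual If-T programs.

// Logical implication between variables: the truth of `isNum` implies that
// `x` is a number. Checkers that track this implication accept both returns.
function flag(x: string | number): number {
  const isNum = typeof x === "number";
  if (isNum) {
    return x;          // x : number, via the implication
  }
  return x.length;     // x : string on this path
}

// User-defined narrowing predicate (a TypeScript type guard): callers of
// `isCat` get `pet` narrowed to `Cat` in the positive branch.
interface Cat { meow(): void; }
interface Dog { bark(): void; }

function isCat(pet: Cat | Dog): pet is Cat {
  return "meow" in pet;
}

function speak(pet: Cat | Dog): void {
  if (isCat(pet)) {
    pet.meow();
  } else {
    pet.bark();
  }
}
```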
Importance: Type narrowing is essential for gradual type systems, but the tradeoffs between systems with different complexity have been unclear. If-T clarifies these tradeoffs by illustrating the benefits and limitations of each level of complexity. With If-T as a way to assess implementations in a fair, cross-language manner, future type system designs can strive for a better balance among precision, annotation burden, and performance.
Statistical Type Inference for Incomplete Programs (conference paper) [peng-StatisticalTypeInferenceIncompletePrograms-2023]
Abstract: We propose a novel two-stage approach, Stir, for inferring types in incomplete programs that may be ill-formed, where whole-program syntactic analysis often fails. In the first stage, Stir predicts a type tag for each token by using neural networks, and consequently, infers all the simple types in the program. In the second stage, Stir refines the complex types for the tokens with predicted complex type tags. Unlike existing machine-learning-based approaches, which solve type inference as a classification problem, Stir reduces it to a sequence-to-graph parsing problem. According to our experimental results, Stir achieves an accuracy of 97.37% for simple types. By representing complex types as directed graphs (type graphs), Stir achieves a type similarity score of 77.36% and 59.61% for complex types and zero-shot complex types, respectively.
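As a rough illustration of what "complex types as directed graphs" can mean (a sketch of my own in TypeScript; the paper's actual encoding and similarity metric are more involved), a type such as Dict[str, List[int]] can be encoded as a small labeled graph, and two such graphs can be compared by label overlap as a crude stand-in for a similarity score:

```typescript
// Hypothetical sketch; not Stir's actual encoding or metric.
type TypeGraph = { nodes: string[]; edges: [number, number][] };

// Dict[str, List[int]] as a directed graph:
//   Dict -> str, Dict -> List, List -> int
const dictStrListInt: TypeGraph = {
  nodes: ["Dict", "str", "List", "int"],
  edges: [[0, 1], [0, 2], [2, 3]],
};

// Crude label-overlap (Jaccard) score, ignoring graph structure.
function labelSimilarity(a: TypeGraph, b: TypeGraph): number {
  const sa = new Set(a.nodes);
  const sb = new Set(b.nodes);
  const shared = [...sa].filter((n) => sb.has(n)).length;
  const total = new Set([...sa, ...sb]).size;
  return total === 0 ? 1 : shared / total;
}
```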
Honors & Awards
Teaching
Teaching Assistant
University of Utah
Teaching Assistant
University of Utah