Design Meeting Notes, 11/30/2022 #51811

Closed
@DanielRosenwasser

Description

--moduleResolution hybrid

  • Very Node-like resolution mode that is mostly targeted at bundlers and special resolvers.
  • What's it have that's special (beyond the weird classic mode)?
    • node_modules package lookup
    • Extensionless
      • Not in --moduleResolution node16/nodenext
    • Directory index lookup (i.e. look up an index.* file)
      • Not in --moduleResolution node16/nodenext
    • exports look-up from package.json
      • Not in --moduleResolution node
    • *.ts imports
      • Nothing has this today!
      • Only when using --allowImportingTsExtensions
  • --allowImportingTsExtensions
    • Could potentially be ported to work on other resolution
    • Requires --noEmit
      • Why?
        • When we emit, we will not rewrite the import paths. So

          import * as foo from "./foo.ts"

          will remain the same in the output .js file:

          import * as foo from "./foo.ts"

          and this will fail in most tools if foo.ts was also compiled to a JavaScript file named foo.js!

      • Basically some other tool is going to handle either compilation of the files or resolution of the files directly - so TypeScript can leave the paths as written.
  • We keep talking about what a bundler does (in the meeting) - why isn't this just called --moduleResolution bundler?
    • Other loaders/runtimes like ts-node, bun, etc.
    • Even if it's not perfect, bundler communicates more than hybrid
  • What else is controversial other than the name?
  • More options than just allowImportingTsExtensions
    • allowImportingTsExtensions (already mentioned)
    • resolvePackageJsonExports - resolve from the exports field the way Node.js 12+ does today (which we do under the node16/nodenext modes)
    • resolvePackageJsonImports - resolve from the imports field the way Node.js 16+ does today (which we do under the node16/nodenext modes)
    • customConditions
  • Does customConditions need
  • We would prefer there to be some sort of hierarchy to the flags, but we're not sure.
  • Do we need to ship all the flags?
    • Browserify and Rollup don't do certain things by default.
    • Rollup might be okay b
  • Prefer to ship as is, evaluate over the rest of the release cycle.
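
  The flags above might combine like this in a tsconfig.json - a sketch only, since names and defaults were still in flux at this point; "bundler" is the name the meeting leans toward for what these notes call hybrid, and "my-condition" is a made-up condition name:

    {
      "compilerOptions": {
        "moduleResolution": "bundler",       // "hybrid" in these notes
        "allowImportingTsExtensions": true,  // permit `import "./foo.ts"`
        "noEmit": true,                      // required by allowImportingTsExtensions
        "resolvePackageJsonExports": true,   // honor package.json "exports"
        "resolvePackageJsonImports": true,   // honor package.json "imports"
        "customConditions": ["my-condition"] // extra resolution conditions
      }
    }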

Deprecation Plan

  • --ignoreDeprecations
    • Specify "5.0" to suppress all the errors that 5.x has deprecated.
    • When 6.0 hits, using "ignoreDeprecations": "5.0" becomes an error.
  • --noFallthroughCasesInSwitch?
    • We want to get out of the syntax linting business, but people likely use it.
    • Not a huge cost anyway.
    • Feels like we're keeping this.
  • --out?
    • Totally broken?
    • No it's not - it only works with a couple of --module modes (amd and system)
    • Only if we give a good error message.
    • Deprecate.
  • --charset?
    • Doesn't get respected anyway.
    • Remove.
  • --target es3
    • Deprecate.
    • Complicates our testing infra now.
  • "prepend": true in project references?
    • We did this for ourselves - but is this widely in use?
    • We see one or two legit repos using it via GitHub's code search - the rest of the results seem to be duplicates of our repo.
    • This complicates our emit quite a bit.
    • Okay, deprecate.
  • Meta: we have two-and-a-half years to change our minds on this
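
  A sketch of how the suppression described above would read in a tsconfig.json ("target": "ES3" stands in for any option deprecated in 5.x):

    {
      "compilerOptions": {
        "ignoreDeprecations": "5.0", // suppress errors for options deprecated in 5.x
        "target": "ES3"              // deprecated; errors without the line above
      }
    }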

Using our internal missing type

#51653

  • Unfortunate that we have so many internal types and different flags!

in Operator Narrowing from Negative Checks

#51339

  • If you have obj: object and write code like

    if ("foo" in obj && typeof obj.foo === "string") {
      obj.foo;
    }

    you would expect obj.foo to be valid and have the type string.

  • If you write the negative case and bail out early, you'd expect the same narrowing

    if (!("foo" in obj) || typeof obj.foo !== "string") {
      return;
    }
    
    obj.foo;
  • This is the same problem as the fact that obj itself is narrowed in a distinct manner from obj.foo.

  • It would really be ideal if obj was narrowed to { foo: string } & object, and narrowing obj.foo would just "fall out" from narrowing obj itself.

  • Can imagine that we just stack intersections as we learn more

    • e.g. object & { foo: unknown } & { foo: string } which simplifies to object & { foo: string }
  • The problem with doing that is you would end up with huge types in some cases - which adds visual noise and impacts performance.

  • Additionally, you can "learn" information by introducing intersections when narrowing types - but when you join from two branches, how do you know which intersections were "learned" from type guards vs. which intersections were already there?

    • Feels doable, making this efficient also makes this harder.
    • We eagerly normalize intersections which is part of what makes this challenging.
  • There's a silver lining - simplifying narrowing to just narrow the roots of references would make us more efficient in other ways.

    • Also, deferring intersections could be faster.
  • Is this compelling?
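
  The two patterns above as a runnable sketch. The positive check already narrows in recent TypeScript (the `in` operator narrows `obj` to `object & Record<"foo", unknown>`, and the typeof guard handles the rest); the early-return form is the case this issue asks about, so it uses a cast where the narrowing doesn't yet "fall out":

    ```typescript
    function positiveCheck(obj: object): string {
      // `"foo" in obj` narrows obj so that obj.foo is accessible (as unknown),
      // and the typeof guard then narrows obj.foo to string.
      if ("foo" in obj && typeof obj.foo === "string") {
        return obj.foo;
      }
      return "none";
    }

    function negativeCheck(obj: object): string {
      // The early-return form: ideally obj would be narrowed to
      // `object & { foo: string }` past this point. Today a cast is needed.
      if (!("foo" in obj) || typeof (obj as { foo?: unknown }).foo !== "string") {
        return "none";
      }
      return (obj as { foo: string }).foo;
    }

    console.log(positiveCheck({ foo: "hello" })); // "hello"
    console.log(negativeCheck({ foo: 42 }));      // "none"
    ```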
