Tech & Engineering Blog

Building a Core Edge Computing Library in TypeScript, Part II

Ori Gold

February 11, 2025

Categories: Technology and Engineering

A few months ago, I wrote about how my team built a business-logic-only library for use across a variety of different edge computing platforms. By leveraging TypeScript, encapsulating our code, and abstracting away all platform-dependent logic, we were able to deliver higher quality code faster than ever before. If you haven’t read the article already, I encourage you to go back and do so.

As we built new libraries for more and more platforms, we iterated on the core library to ensure it stayed generic and usable across all the different edge computing platforms that we support. I thought I would share a few more principles and practices we’ve adopted.

Use interfaces. Yes, even more.

TL;DR: Functionality you might think is basic, standard, and built into JavaScript may not be available in the environment you’re developing for. You can provide a default implementation, but always define it as an interface first. You never know when you’ll need to replace it.

In the previous article, I mentioned how we used dependency inversion and interfaces when using libraries. Whenever importing a new dependency, we always defined an interface and then encapsulated the package in a class that implements that interface.

For example, I referred to wrapping the uuid package in a class (e.g., DefaultUuidGenerator) that implements a defined interface (e.g., IUuidGenerator). This way, if a platform we’re working on does not support the uuid package for any reason, the uuid package-based implementation can be swapped with a different one (e.g., SomeOtherUuidGenerator) easily.
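That pattern looks roughly like this in miniature. (A sketch: the real class wraps the uuid package, but here Node's built-in crypto.randomUUID stands in so the snippet carries no external dependency.)

```typescript
import { randomUUID } from 'node:crypto';

// The core library depends only on this interface.
export interface IUuidGenerator {
  generate(): string;
}

// A default implementation. Swapping in SomeOtherUuidGenerator is just a
// matter of writing another class that implements IUuidGenerator.
export class DefaultUuidGenerator implements IUuidGenerator {
  public generate(): string {
    return randomUUID();
  }
}
```

The core library only ever references IUuidGenerator; the concrete class is injected from outside.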

I covered this pretty extensively in the previous article. So why am I bringing it up yet again? Because we can’t trust every platform to implement even the native functions in JavaScript.

Consider the class URL, which exists both in Web API and in Node.js. It’s ubiquitous and considered pretty standard in any JavaScript environment. And still, we ran into a platform that did not support it natively.

We tried adding the whatwg-url package as a polyfill, but even this relatively small amount of code inflated the library to the point that the platform was unable to deploy a code bundle of that size.

We tried writing a leaner URL polyfill, implementing only those members and functions that our core library actually uses. This caused a different problem: the customized URL polyfill meant for internal use only would now be available in the global scope, free to be used — or more likely misused — by the end user. What if they accessed a member or function that we purposefully didn’t implement? What if adding in our polyfill actually replaced a polyfill they intended to use?

That’s when we realized that even supposedly built-in functionality, from URL to Date, needed to be encapsulated and abstracted. We created interfaces that defined only those members and methods we planned to use in our core library, but in a way that overlapped with the expected built-in classes.

// Core library
interface IUrl {
   href: string;
   host: string;
   pathname: string;
}

For example, the built-in URL class contains a number of different members and functions, but our core library only used a subset of them. So we reduced our IUrl interface to only what our core library actually needed (href, host, and pathname), and ensured the members and methods were defined such that the built-in URL class would implement the interface automatically.
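The structural typing at work here can be seen in a minimal illustration (not core-library code):

```typescript
// The built-in URL class already has href, host, and pathname, so any URL
// instance satisfies IUrl structurally, with no "implements" clause required.
interface IUrl {
  href: string;
  host: string;
  pathname: string;
}

const url: IUrl = new URL('https://example.com/some/path?q=1');
```

Because TypeScript checks shapes rather than declarations, the assignment compiles without URL ever knowing IUrl exists.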

Of course, we also needed an IUrlUtils interface to provide a way to create new instances of classes that implement the IUrl interface.

// Core library
import { IUrl } from './IUrl';
export interface IUrlUtils {
   createUrl(rawUrl: string): IUrl;
   // ...
}

And finally, a default implementation that relies on the built-in URL class.

// Core library
import { IUrl } from './IUrl';
import { IUrlUtils } from './IUrlUtils';
export class DefaultUrlUtils implements IUrlUtils {
   public createUrl(rawUrl: string): IUrl {
       return new URL(rawUrl);
   }
   // ...
}

This encapsulates all our usage of URL (and all other URL utilities) into the DefaultUrlUtils class. Because the core library depends only on the IUrlUtils interface, we can replace the default implementation with a different one without changing any code in the core library, and without polluting the global scope.
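For example, on a platform with no native URL, the default could be swapped for a hand-rolled implementation. This is a hypothetical sketch that parses only the fields IUrl exposes; it is nowhere near a full WHATWG URL parser:

```typescript
interface IUrl {
  href: string;
  host: string;
  pathname: string;
}

interface IUrlUtils {
  createUrl(rawUrl: string): IUrl;
}

// Hypothetical replacement for DefaultUrlUtils on platforms without URL.
// It extracts only what IUrl needs via a simple regular expression.
class MinimalUrlUtils implements IUrlUtils {
  public createUrl(rawUrl: string): IUrl {
    const match = /^[a-z][a-z0-9+.-]*:\/\/([^/?#]+)([^?#]*)/i.exec(rawUrl);
    if (!match) {
      throw new Error(`Cannot parse URL: ${rawUrl}`);
    }
    return { href: rawUrl, host: match[1], pathname: match[2] || '/' };
  }
}
```

Nothing in the core library changes; the platform library simply injects MinimalUrlUtils instead of DefaultUrlUtils.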

Isolate export paths for all external package-dependent classes.

TL;DR: Components in your core library that depend on third party packages should be exported individually and separately. Don’t bundle them in with all the other exports in your core library because you don’t know what code will run when they’re imported.

To streamline and simplify how we export classes from the core library, my team agreed on the convention of including an index.ts file in every directory that exported all files and directories inside of it.

For example, let’s say we had this file structure in our core library:

src/
├── http/
│    ├── IHttpClient.ts
│    ├── PhinHttpClient.ts
│    ├── constants.ts
│    └── index.ts
├── uuid/
│    ├── IUuidGenerator.ts
│    ├── DefaultUuidGenerator.ts
│    └── index.ts
└── index.ts

Here, IHttpClient and IUuidGenerator are interfaces, the constants file contains primitive constants only, DefaultUuidGenerator is the default implementation based on the uuid package, and PhinHttpClient is an implementation based on the phin package.

We also have three different index.ts files: one in src/http, one in src/uuid, and one in src. The contents of these files look like this:

// src/http/index.ts
export * from './IHttpClient';
export * from './PhinHttpClient';
export * from './constants';
// src/uuid/index.ts
export * from './IUuidGenerator';
export * from './DefaultUuidGenerator';
// src/index.ts
export * from './http';
export * from './uuid';

Our src folder recursively exports the contents of all the files it contains: all types, interfaces, classes, constants, and functions. Everything is accessible when the core library is imported into the platform library, and all components are exported together, which means that they are all imported together as well.

Importing the core library from the platform library will look like this:

// Platform library
import { IHttpClient } from 'core-library';

This seems easy and innocent enough. We’re importing only the IHttpClient interface, right?

Wrong. This approach is problematic because importing any part of the core library will import all of it. So the import statement above doesn’t just import the IHttpClient interface; it imports the entire core library.

But… wait. Why can’t we import the entire core library? Doing so only defines the constants, functions, types, interfaces, and classes that we can use; it doesn’t actually execute any logic, right? And since the core library is meant to be platform-agnostic anyway, what’s the big deal?

Remember the DefaultUuidGenerator and PhinHttpClient classes? They’re being imported, too. And remember how these classes depend on the external NPM packages? You know what that means: we’re bringing the uuid and phin libraries into the mix as well. And who knows what these dependencies are doing? They might make incorrect assumptions about the runtime environment and execute code as part of the import statement. Or they might import other dependencies that do these things. 

I say this as though it’s theory, but this is exactly what happened in one of the platforms we were working on. So yeah, it was definitely a problem.

What we decided to do was separate the import paths for those implementations that depend on external dependencies. This way, platform libraries can import only the implementations (and corresponding external dependencies) that are actually used.

It looks something like this:

// Platform library
// core library components that don't require any external dependency
import { IHttpClient, IUuidGenerator } from 'core-library';
// core library components that require external dependencies
import { DefaultUuidGenerator } from 'core-library/impl/uuid/DefaultUuidGenerator';
import { PhinHttpClient } from 'core-library/impl/http/PhinHttpClient';

How did we achieve this?

First, we changed our core library file structure so that all implementations were contained in the src/impl directory. We made sure not to export this directory from the src/index.ts file.

src/
├── http/
│    ├── IHttpClient.ts
│    ├── constants.ts
│    └── index.ts
├── uuid/
│    ├── IUuidGenerator.ts
│    └── index.ts
├── impl/
|    ├── uuid/
│    |    └── DefaultUuidGenerator.ts
|    └── http/
│         └── PhinHttpClient.ts
└── index.ts
// src/index.ts
export * from './http';
export * from './uuid';
// purposefully not exporting from ./impl!

Then we defined subpath exports in our package.json file that map the import paths to the directory paths within the library. (Note that we export files from the lib directory, which is identical in structure to the src directory except with compiled JavaScript files rather than TypeScript source code.)

{
 "name": "core-library",
 "version": "1.0.0",
 "exports": {
   ".": "./lib/index.js",
   "./impl/*": "./lib/impl/*.js"
 } 
}

This allows platform libraries to freely import whatever they need from the main core-library import without worrying about potentially incompatible third-party dependencies hitching a ride. If they need a specific implementation that our core library provides, they can import it separately and explicitly. 

Build and publish your libraries as both CommonJS and ES Modules.

TL;DR: Some users will use CommonJS. Others will use ES Modules. Support both.

You can use various packages like Babel or Rollup, but TypeScript natively supports compilation to ES Modules and to CommonJS. We achieved this by creating a base tsconfig.json file with common TypeScript compilation configurations and more specialized tsconfig.*.json files that extend the base one.

The base tsconfig.json file looks more or less like this:

{
 "compilerOptions": {
   "target": "ES2022",
   "module": "ES6",
   "moduleResolution": "Node",
   // ...
 },
 "include": ["src"],
 "exclude": ["lib"]
}

Then, we created a tsconfig.esm.json for ES Modules build files. It extends the base tsconfig.json file and adds an outDir setting so that all ES Modules build files will be output to the lib/esm directory.

{
 "extends": "./tsconfig.json",
 "compilerOptions": {
    "outDir": "lib/esm"
 }
}

We can do the same for tsconfig.cjs.json, except this time we also override the base file’s target and module settings to produce CommonJS modules (with an ES5 target for broader compatibility).

{
 "extends": "./tsconfig.json",
 "compilerOptions": {
   "target": "ES5",
   "module": "CommonJS",
   "outDir": "lib/cjs"
 }
}

Now, we’ve got our two sets of build files in the lib/esm and lib/cjs directories. We included one final tsconfig.dec.json to output our TypeScript declaration files into a separate directory, since these declaration files are the same for both.

{
 "extends": "./tsconfig.json",
 "compilerOptions": {
   "declaration": true,
   "emitDeclarationOnly": true,
   "outDir": "lib/types"
 }
}

Then, we can adjust our package.json to properly export the ES Modules, CommonJS, and TypeScript definition files.

// package.json
{
 "name": "core-library",
 "type": "module",
 "typesVersions": {
   "*": {
     "*": [
       "lib/types/*"
     ]
   }
 },
 "exports": {
   ".": {
     "types": "./lib/types/index.d.ts",
     "import": "./lib/esm/index.js",
     "require": "./lib/cjs/index.js",
     "default": "./lib/cjs/index.js"
   },
   "./impl/*": {
     "types": "./lib/types/impl/*.d.ts",
     "import": "./lib/esm/impl/*.js",
     "require": "./lib/cjs/impl/*.js",
     "default": "./lib/cjs/impl/*.js"
   }
 },
 "scripts": {
   "build": "npm run build:cjs && npm run build:esm && npm run build:dec",
  "build:cjs": "tsc -p tsconfig.cjs.json && echo '{\"type\":\"commonjs\"}' > ./lib/cjs/package.json",
   "build:esm": "tsc -p tsconfig.esm.json",
   "build:dec": "tsc -p tsconfig.dec.json",
   // ...
 }
 // ...
}

Notice a few things about this file.

  • We have a typesVersions property that indicates where our type definitions are. Defining the types versions in this way allows for importing types from the main core library and from the impl submodule. (This causes some annoying behavior when importing from the library. If you have a better way of doing this, we’re all ears!)
  • The package.json file has the "type": "module" property, which defines the library as an ES Modules project. This is fine for the lib/esm directory, but it means the compiled JavaScript files in the lib/cjs directory will be incorrectly interpreted as ES Modules. To ensure our lib/cjs files are interpreted as CommonJS, we need to add a lib/cjs/package.json file with the contents {"type": "commonjs"}, which is exactly what the echo command in the build:cjs script does.

Protect, don’t privatize, your core library methods and properties.

TL;DR: Keep methods and properties protected rather than private to make core library classes more flexible and reusable.

Supporting tons of different runtimes means that more often than not, our platform libraries need to make all sorts of tweaks to the core library behavior. And you never know where these tweaks might be needed.

Let’s say that part of our core functionality requires sending asynchronous HTTP requests with telemetry data. So we wrote this class in the core library.

// Core library
export class HttpTelemetryClient implements ITelemetryClient {
 private readonly httpClient: IHttpClient;
 constructor(httpClient: IHttpClient) {
   this.httpClient = httpClient;
 }
 public async sendTelemetry(context: IContext): Promise<void> {
   const telemetry = this.createTelemetry(context);
   await this.postTelemetry(telemetry);
 }
 private createTelemetry(context: IContext): Telemetry {
   // ...
   return {
     // data
   };
 }
 private async postTelemetry(telemetry: Telemetry): Promise<void> {
   const request = this.createTelemetryRequest(telemetry);
   await this.httpClient.send(request);
 }
 private createTelemetryRequest(telemetry: Telemetry): Request {
   return new Request('http://backend.server.com/telemetry', {
     method: 'POST',
     headers: { 'Content-Type': 'application/json' },
     body: JSON.stringify(telemetry),
   });
 }
}

At first glance, this seems fine. We’re encapsulating the logic of creating and sending the telemetry request in a clearly-named and interface-adherent class, and the wrapper library can implement the injected IHttpClient however it needs to. 

But what happens if we want to add a particular field to the Telemetry data for a particular platform? Or what if we have to change the HTTP request due to a platform limitation? If the methods were protected rather than private, the platform library would be able to leverage them with as much flexibility as needed.

// Platform library
import { HttpTelemetryClient } from 'core-library';
export class TelemetryClient extends HttpTelemetryClient {
 protected override createTelemetry(context: IContext): Telemetry {
   const telemetry = super.createTelemetry(context);
   telemetry['platform_specific_field'] = 'data';
   return telemetry;
 }
 protected override createTelemetryRequest(telemetry: Telemetry): Request {
   const request = super.createTelemetryRequest(telemetry);
   request.headers.set('Platform-Specific-Header', 'value');
   return request;
 }
}

Group options into objects with clear types for overriding defaults.

TL;DR: Using clearly-defined options object types for parameters offers clarity, simplicity, and extensibility.

Dependency injection and customizable implementations were both central features of the core library. If low-level abstractions have one or two dependencies, high-level abstractions have far more. Injecting these dependencies as individual constructor parameters quickly became unsustainable.

Let’s assume our CoreFunctionality class depends on IUrlUtils, IBase64Utils, ITelemetryClient, and ILogger. The first two need to be provided by the platform library, and the second two are optional, since we have default implementations for them. We could write our CoreFunctionality constructor to accept each of these dependencies individually, like this:

// Core library
export class CoreFunctionality implements ICoreFunctionality {
 constructor(
   urlUtils: IUrlUtils,
   base64Utils: IBase64Utils,
    telemetryClient: ITelemetryClient = new DefaultTelemetryClient(),
    logger: ILogger = new DefaultLogger(),
 ) {
   // ...
 }
}

But hopefully it’s clear that this approach creates a number of issues.

First and foremost, this is a compatibility nightmare. What if we need to add ICryptoUtils or remove the ITelemetryClient? Any change to the core functionality could result in a change to the constructor’s signature. If the platform libraries call this high-level class constructor directly (which, chances are, they do), it means a change to the platform libraries every time the core functionality changes. Yikes.

Second, this makes optional dependencies clunky and unintuitive. It might not seem like a big deal in the core library, but what if a certain platform needs to use a custom logger and the default telemetry client? We would need to do something like this:

// Platform library
import { CoreFunctionality } from 'core-library';
import { PlatformUrlUtils } from './PlatformUrlUtils';
import { PlatformBase64Utils } from './PlatformBase64Utils';
import { PlatformLogger } from './PlatformLogger';
const coreFunctionality = new CoreFunctionality(
 new PlatformUrlUtils(),
 new PlatformBase64Utils(),
 undefined, // what is this??
 new PlatformLogger(),
);

To solve this issue, we can combine all the dependencies that any class might need into a single options parameter. This allows for a lot more flexibility, especially when new optional dependencies are added. Plus, if options fields (e.g., urlUtils, base64Utils) are named consistently across different classes, then passing options down the abstraction chain becomes much, much easier.

// Core library
export type CoreFunctionalityOptions = {
 urlUtils: IUrlUtils;
 base64Utils: IBase64Utils;
 telemetryClient?: ITelemetryClient;
 logger?: ILogger;
};
export class CoreFunctionality implements ICoreFunctionality {
 constructor(options: CoreFunctionalityOptions) {
   // ...
 }
}
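With this shape, the awkward call from earlier becomes self-documenting: only the dependencies being customized are named, and defaults are applied inside the constructor. Here is a stripped-down, self-contained sketch of that defaulting logic (names simplified from the article’s interfaces):

```typescript
interface ILogger {
  log(message: string): void;
}

type CoreFunctionalityOptions = {
  name: string;     // stand-in for a required dependency
  logger?: ILogger; // optional, with a default implementation
};

class CoreFunctionality {
  private readonly name: string;
  private readonly logger: ILogger;

  constructor(options: CoreFunctionalityOptions) {
    this.name = options.name;
    // Optional dependencies fall back to defaults; callers never pass undefined.
    this.logger = options.logger ?? { log: () => { /* no-op default */ } };
  }

  public describe(): string {
    this.logger.log(`describing ${this.name}`);
    return this.name;
  }
}

// Only the fields we care about are present; no positional placeholders.
const core = new CoreFunctionality({ name: 'platform-x' });
```

Adding a new optional dependency later extends the options type without breaking a single existing call site.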

Write unit tests.

TL;DR: One less bug in your core library means one less bug in all the platform libraries. When you thoroughly test your core library, you’re also testing every library that depends on it.

Unit testing is important; there’s no doubt about that. But in order to write unit tests, your code needs to be testable. Thankfully, all the principles of dependency inversion, abstraction, and encapsulation make our code easily testable.

We use the mocha and chai frameworks and subscribe to the arrange-act-assert paradigm, but feel free to do whatever works for you.

One suggestion I will make is that when writing tests for the core library, different implementations of the same interfaces should pass the same exact tests. For example, let’s say we have the following interface in our core library:

// Core library
export interface IBase64Utils {
   base64Encode(str: string): string;
   base64Decode(str: string): string;
}

Since different platforms have different encoding/decoding capabilities, our core library provides three different implementations that platform libraries can choose from:

  • AtobBase64Utils, which uses the Web API atob and btoa functions
  • BufferBase64Utils, which uses the Node.js Buffer class
  • JSBase64Base64Utils, which uses the js-base64 NPM package
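As one example, the Buffer-backed variant might look like this (a sketch assuming Node.js; the article doesn’t show its internals):

```typescript
export interface IBase64Utils {
  base64Encode(str: string): string;
  base64Decode(str: string): string;
}

// Node.js implementation backed by the built-in Buffer class.
export class BufferBase64Utils implements IBase64Utils {
  public base64Encode(str: string): string {
    return Buffer.from(str, 'utf-8').toString('base64');
  }

  public base64Decode(str: string): string {
    return Buffer.from(str, 'base64').toString('utf-8');
  }
}
```

The atob/btoa and js-base64 variants implement the same interface, which is what lets all three share a single test suite.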
We then ran the same parameterized test suite against each implementation:

// Core library (tests)
const runBase64UtilsTests = (name: string, base64Utils: IBase64Utils) => {
   describe(name, () => {
       describe('base64Decode', () => {
           // all tests for decoding
       });
       describe('base64Encode', () => {
           // all tests for encoding
       });
   });
};
const base64Utils: Array<{ name: string; utils: IBase64Utils }> = [
   { utils: new AtobBase64Utils(), name: 'AtobBase64Utils' },
   { utils: new BufferBase64Utils(), name: 'BufferBase64Utils' },
   { utils: new JSBase64Base64Utils(), name: 'JSBase64Base64Utils' },
];
base64Utils.forEach(({ name, utils }) => {
   runBase64UtilsTests(name, utils);
});

This code follows the DRY principle since we only need to write the base64 encoding and decoding tests once to test all three implementations. Additionally, reusing the same tests for all three implementations ensures not only correct behavior, but also consistent behavior.
