How We Made A Commerce Admin Site

This is a translated post of the original article: https://techblog.woowahan.com/15084/

Note: Baemin is a popular food delivery app in Korea.

The Baemin Commerce Web Frontend Development Team, which is responsible for the Baemin Commerce web as a whole, is a team of more than 20 front-end developers. We develop Bmart, the Baemin Store service, and the Admin for each service. I currently work with 10 developers in the platform part, which mainly develops Admin.

This article introduces how we have developed Admin in the platform part over a year since the birth of Baemin Commerce Web Admin.

The Birth of Commerce Admin

In December 2021, Baemin Store, a commerce service that allows sellers to register and sell products themselves, launched its first pilot in Gangnam, Seoul. In the early days of Baemin Store, there was a seller admin for sellers, but it had far fewer menus than it does now, and there were operational difficulties, such as relying on Google Forms because there was no self-built seller onboarding process. After organizing the various problems and improvements, we built a completely new commerce platform web admin (operator-admin, seller-office) in early 2022 to improve the service experience.

We chose MultiRepo for faster growth

Note: MultiRepo means having multiple repositories for a single project.

Since Baemin Store was already open and serving users, we had to speed up Admin development above all else, so we chose the multirepo approach. Because each repo has no dependencies on the others, development can proceed quickly. With the multirepo method, each project was able to create its own common components and grow explosively.

Limits of multirepo

Although the shape of the admin was established to some extent and the development environment and CI/CD for each project were in place, the multirepo structure had the following problems.

  • Redundant development of common components (UI components, custom hooks, devOps) in every project
  • When the version of a third-party package used by one repo changes, compatibility issues arise in other repos that use that repo's modules
  • An inconsistent developer experience (DX), since each project, given its high autonomy, uses its own conventions and commands

The biggest problem, in our view, was the management of UI components. At the time, they were not a design-system-level library but plain common UI components, and the entire set of components was copied into each project rather than published to a package registry, so that it could be developed quickly and shipped to users.

We could have kept the multirepo method for quick short-term development, but we needed to stop growing the projects for a moment and come up with an overall solution. So we decided to pause admin page development for a while and split the team's resources into two groups to tackle the action items below for about two weeks.

Packaging Common Components:
Integrate and package common components distributed across projects

Converting multirepo to monorepo:
Note: monorepo means having a single repository for multiple projects.
Combine three Admin projects into one repo and make packaged common components available for each project

Packaging Common Components

When packaging common components, the focus was on redefining the package structure.

  • Components
    • Before: All UI components except Form are flat; all logic exists inside each component, with or without context
    • After: Separated into layers such as base and core based on the presence or absence of context; for UI components, sub-components are separated to establish an interface that limits some design extensions; Form components are composed with react-hook-form (a sketch of the resulting package layout follows this list)
  • Utils
    • Before: hooks, config, util, lib created separately for each project
    • After: all consolidated in one shared project
  • Build
    • Before: (None)
    • After: Set up ESLint and the TypeScript compiler for smooth code sharing within the monorepo
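As a rough illustration, the repackaged entry point might look something like the sketch below. This is a minimal, hypothetical layout; the component and hook names are assumptions, not the actual package contents.

// packages/shared/src/index.ts (hypothetical layout sketch)
// base: presentational components that do not own any context
export { BaseButton } from './base/BaseButton'
// core: components that own context shared across their sub-components
export { Modal } from './core/Modal'
// Form: sub-components composed with react-hook-form, exposing a limited design surface
export { Form } from './core/Form'
// hooks and utils consolidated from the individual projects
export { useDebounce } from './hooks/useDebounce'
export { formatCurrency } from './utils/formatCurrency'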

There are two reasons for redefining the package structure.

  • Prepare in advance to separate the packages from the projects that use them.
  • Pre-configured boilerplates that can be used as a reference when adding packages in a monorepo in the future.

Multirepo to monorepo

Nx, Turborepo, and Lerna are some of the build system tools for configuring a monorepo. These tools provide many features, such as distributed caching and incremental builds, but at the time we judged that the platform repository was still an early-stage project and did not yet need a monorepo build system.

What we needed right away was shareable source code

An easy way to configure a per-project dependency tree for code sharing is the workspace feature of a package manager. To configure the monorepo with workspaces, we adopted pnpm as our package manager.
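As a minimal sketch, the workspace is declared in a pnpm-workspace.yaml at the repository root; the folder names below are assumptions based on the workspace layout described later, not the team's actual file.

# pnpm-workspace.yaml (a minimal sketch; folder names are assumptions)
packages:
  - 'packages/*'
  - 'projects/*'

A project can then reference a sibling package with pnpm's workspace protocol, for example "@packages/shared": "workspace:*" in its dependencies.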

We thought that the common components, once packaged, could simply sit in the workspace alongside the projects, and that by referencing each package via relative paths from the monorepo root we could share code more easily than expected, without a separate build step. However, it was not that simple.

Naturally, most of the common components in the packages are written in React, but since they were not built, they could not be used in the projects as-is without being transpiled. In addition, because pnpm has a non-flat node_modules, when a project was bundled the actual files did not exist in node_modules inside the package, so the bundle would not run in the deployment environment.

Unlike npm and yarn, pnpm does not pull up all installed packages directly under the node_modules folder.

We used the following methods to solve both problems.

  • Module transpiling (see the sketch after this list)
    • Next.js project: use the next-transpile-modules plugin (integrated into Next.js as of v13.1)
    • CRA project: extend the webpack settings using craco
  • Setting up the deployment files
    • Write a script that extracts the bundled project together with the build artifacts of the packages it uses
    • Run that script in the deployment pipeline
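For the Next.js side, a minimal sketch of the transpile setup could look like the following; the package name is taken from the shared package shown later, while the rest of the config is an assumption rather than the team's actual file. On Next.js 13.1+ the built-in transpilePackages option serves the same purpose.

// next.config.js (a minimal sketch; only the transpile setting is the point here)
const withTM = require('next-transpile-modules')(['@packages/shared'])

module.exports = withTM({
  reactStrictMode: true,
})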

Results of converting to monorepo

The following is a view of the workspace, divided into packages and two projects.

(Image: the monorepo workspace structure)

There are two main achievements by switching to monorepo:

  • Since packages and projects are clearly separated, roles and responsibilities are clearer when distributing tasks.
  • Maintain a consistent history by managing changes to multiple projects in a single repo, with improved code review environments.

In addition, it has helped a lot in the growth of individuals, such as experience in designing UI components and configuring the pnpm monorepo environment.

Despite being busy with business development, we suspended all other work for two weeks and carried out the monorepo conversion. And it paid off.

Making the admin more solid

After switching to monorepo, months passed quickly, focusing on improving UI component functionality and developing business requirements. As the admin page has stabilized to some extent within the team, there have been some work adjustments to refocus on service webview development, but about ten people were still committing to develop Admin every day, and I felt that the amount of code was really increasing day by day.

Rediscover the problem

The code sharing we had configured earlier involved no build step. It was very convenient that changes to packages were immediately reflected in a locally running project, but as the amount of code grew, hot module replacement (HMR) began to slow down, which soon became a bottleneck for business development.

In addition, it was cumbersome to install the same external package versions directly into every project to keep dependency versions compatible.

The approaches we took to solve this problem are as follows.

  • To separate the code boundaries between packages, each package should be built independently.
  • Build orchestration should be constructed to effectively manage the complex dependencies between packages.

Beyond code sharing: library

0. Establishing strategies for library management

Configuring a package independently means that a package's location is not limited to a workspace within the monorepo; it can also be deployed to an on-premise package registry as a versioned library. As a result, we needed to plan in advance how a package would be operated once it was deployed as a versioned library, beyond just building it.

Without such a strategy, anyone can add and distribute libraries or packages freely without restriction, but the more libraries and packages are created without clear criteria, the more management points there are and the harder it becomes to integrate the code when configuring build orchestration.

After many discussions, we were able to define the library hierarchy and establish operational strategies as follows.

(Image: the library hierarchy)

  • Dependencies flow direction: External library <- Internal library <- For sharing only <- Package

Additional tasks were performed to operate the library, such as applying semver for versioning, branching strategies for external/internal libraries separated into separate repositories, and automating deployments with canary deployment.

1. Build the package

There are several options for building each package independently, and we decided to use Vite as the bundler.

Vite means "quick" in French and it is a build tool created with a focus on the experience of developing a fast and concise modern web project.

Vite's distinctive feature is that, as its name suggests, build and local drive speeds are extremely fast, as it provides HMR using Native ESM.

Vite provides HMR over native ESM rather than through a bundler. When a module is modified, Vite replaces only the parts related to that module and serves the replaced module when the browser requests it. Because ESM is used throughout the entire process, update times, including HMR, are unaffected even as the app grows.

To make it easier to configure build kits with Vite and other tools, we first migrated the existing Admin projects to React 18, since ESM support for React's JSX runtime started with React 18. The build kits we configured are as follows.

(Image: the build kit configuration)

// package.json
{
  "name": "@packages/shared",
  "version": "0.0.0",
  "license": "MIT",
  "main": "./dist/index.cjs.js",
  "module": "./dist/index.esm.mjs",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": {
      "import": "./dist/index.esm.mjs",
      "require": "./dist/index.cjs.js"
    },
    "./index.css": "./dist/assets/index.css"
  },
  "scripts": {
    "build": "vite build && tsc -p tsconfig.build.json && tsc-alias -p tsconfig.dts.json",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "react-hook-form": ">=7.27.1 <7.32.0"
  },
  "devDependencies": {
    "@babel/core": "^7.18.0",
    "@swc/core": "^1.3.24",
    "@types/node": "16",
    "@types/react": "18.2.13",
    "@types/react-dom": "^18.2.6",
    "@vitejs/plugin-react": "^3.1.0",
    "prettier": "^2.5.1",
    "react": "18.2.0",
    "react-dom": "18.2.0",
    "react-query": "^3.34.16",
    "rollup": ">=3.0.0",
    "rollup-plugin-swc3": "^0.8.0",
    "tsc-alias": "^1.8.2",
    "typescript": ">=4.6.3 <4.7.0",
    "vite": "^4.0.4",
    "vite-plugin-dts": "^1.7.1",
    "vite-plugin-static-copy": "^0.13.0",
    "vite-tsconfig-paths": "^4.0.5"
  },
  "engines": {
    "node": "16",
    "pnpm": "7"
  },
  "volta": {
    "node": "16.14.2",
    "pnpm": "7.30.0"
  },
  "peerDependencies": {
    "@types/react": "18.2.13",
    "@types/react-dom": "^18.2.6",
    "react": "18.2.0",
    "react-dom": "18.2.0",
    "react-query": "^3.34.16",
    "rollup": ">=3.0.0"
  }
}
// vite.config.js
import { resolve } from 'path'
import react from '@vitejs/plugin-react'
import { swc } from 'rollup-plugin-swc3'
import { defineConfig } from 'vite'
import { viteStaticCopy } from 'vite-plugin-static-copy'
import tsconfigPaths from 'vite-tsconfig-paths'
import pkg from './package.json'

const makeExternalPredicate = (externalArr: string[]): ((id: string) => boolean) => {
  const excludeStorybooks = /\.?stories.ts(x)$/
  const excludeTests = /\.test.ts(x)$/
  const externalPackagesRegex = externalArr.length === 0 ? null : new RegExp(`^(${externalArr.join('|')})($|/)`)
  return (id: string) => {
    return (excludeStorybooks.test(id) || excludeTests.test(id) || externalPackagesRegex?.test(id)) ?? false
  }
}

const externals = makeExternalPredicate(Object.keys(pkg.peerDependencies))

export default defineConfig({
  plugins: [
    react({
      // jsxImportSource is supported by both @vitejs/plugin-react and @vitejs/plugin-react-swc via the jsxRuntime setting;
      // Emotion is used as the JSX import source.
      jsxImportSource: '@emotion/react',
    }),
    tsconfigPaths({ root: './' }),
    viteStaticCopy({
      targets: [
        {
          src: 'src/**/*.css',
          dest: 'assets',
        },
        {
          src: 'src/**/*.woff2',
          dest: 'assets',
        },
        {
          src: 'src/**/*.ttf',
          dest: 'assets',
        },
      ],
    }),
  ],
  build: {
    sourcemap: true,
    lib: {
      entry: resolve(__dirname, 'src/index.ts'),
      name: 'Lib',
      formats: ['cjs', 'es'],
      fileName: (format) => {
        switch (format) {
          case 'es':
          case 'esm':
          case 'module':
            return 'index.esm.mjs'
          case 'cjs':
          case 'commonjs':
            return 'index.cjs.js'
          default:
            return 'index.' + format + '.js'
        }
      },
    },
    rollupOptions: {
      output: {
        interop: 'auto',
      },
      plugins: [swc()],
      external: externals,
    },
  },
})
// [root] tsconfig.json
{
  "compilerOptions": {
    "target": "es2015",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "noEmit": true,
    "esModuleInterop": true,
    "module": "esnext",
    "moduleResolution": "node",
    "noImplicitAny": true,
    "resolveJsonModule": true,
    "isolatedModules": true,
    "jsxImportSource": "@emotion/react",
    "incremental": true,
    "strictNullChecks": true,
    "noImplicitThis": false,
    "allowSyntheticDefaultImports": true,
    "noFallthroughCasesInSwitch": true
  },
  "exclude": ["node_modules"]
}
// [project] tsconfig.json
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "baseUrl": "src",
    "typeRoots": ["node_modules/@types", "types"],
    "jsx": "react-jsx"
  },
  "include": ["src/index.ts", "**/*.ts", "**/*.tsx"],
  "exclude": ["node_modules"]
}
// [project] tsconfig.build.json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist",
    "declaration": true,
    "emitDeclarationOnly": true,
    "noEmit": false
  },
  "include": ["src/index.ts", "**/*.ts", "**/*.tsx"],
  "exclude": ["node_modules", "**/stories.tsx", "**/*.stories.tsx", "**/*.test.ts", "./*.ts"]
}
// [project] tsconfig.dts.json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist",
    "baseUrl": "./dist"
  },
  "include": ["dist/index.ts", "dist/**/*.ts", "dist/**/*.tsx"],
  "exclude": ["node_modules", "./*.ts"]
}

In addition to tsconfig.json, which supports JS transpiling and the IDE environment, we separated out tsconfig.build.json for emitting DTS files and tsconfig.dts.json for rewriting relative paths together with tsc-alias. We then ran the build script vite build && tsc -p tsconfig.build.json && tsc-alias -p tsconfig.dts.json to collect the built files and DTS files in the outDir folder.

One thing worth noting is that in vite.config.js the build.rollupOptions.output.interop option is set to auto, which ensures interoperability between CJS and ESM for Emotion.

2. Build orchestration

The build kit now makes it easy and fast to build packages. All you need to do is specify the version of the package you are actually going to use in the project and install it. If the package version is updated while in use, the project just needs to bump the version and reinstall the dependency.

However, the story is different for packages located in the monorepo workspace that are not distributed to a private registry. Since this is no longer plain code sharing, the package has to be rebuilt whenever it is updated. The more updated packages there are, the more likely the developer is to end up in the following situation.

Developer A: We should modify packages A, B, and C!
Developer B: Okay, modifications done! Let's rebuild the packages and start the project.
Developer A: Why is it failing? Hmm... package D depends on package A. We should build package D too.
Developer B: Finally done. But is it okay to manually build the packages according to their dependencies every time?

We can express the situation above in commands. We would have to run each of the commands below, one line at a time, as shown.

# Build packages A, B, C
$ pnpm --filter @packages/A build
$ pnpm --filter @packages/B build
$ pnpm --filter @packages/C build
# Build package D
$ pnpm --filter @packages/D build
# Start the project
$ pnpm --filter @projects/seller-admin start

To better organize the build orchestration, we decided to introduce Turborepo, which was considered a candidate for the build system tool at the time of the monorepo transition.

(Image: turbo run test executing a task in every workspace)

In the picture above, a single turbo run test command runs a task in every workspace in the monorepo: this is Turborepo's multitasking feature. But what we wanted was, "When you run a project, build the packages the project depends on first." Turborepo provides exactly this kind of task dependency.

To define task behavior between workspaces in Turborepo, you must create a turbo.json file. Unless a separate filter option is given, tasks defined in this file run the scripts in package.json across all workspaces.

{  "$schema": "https://turbo.build/schema.json",  "pipeline": {    "build": {      // A workspace's `build` command depends on its dependencies'      // and devDependencies' `build` commands being completed first      "dependsOn": ["^build"],    }  }}

Looking at this simple Turborepo example, the build declared in the pipeline runs the build script defined in each workspace's package.json, but the ^build in the dependsOn value means two things:

  • dependsOn is the set of tasks this task depends on.
  • The ^ symbol means the task is first run in the workspaces listed in the current workspace's dependencies and devDependencies.

In the above example, if the packages A, B, C, and D all exist in the dependencies of the project, you can see that the build command in the A, B, C, and D package workspace is executed first when the run pipeline is executed, and then the project is executed.

// actual turbo.json from the project
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "run": {
      "persistent": true,
      "dependsOn": ["^build"]
    },
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", ".next/**"]
    },
    ...
  }
}

The run pipeline runs when a project is run in a local development environment, and the build pipeline runs at deployment. Both pipelines have been set up to build the packages they rely on first through "dependsOn": ["^build"].

We used Turborepo to automate some of the task-dependent pipelines, but something was still missing. When a file changes in another workspace that the currently running project depends on, we wanted the change to be detected and built automatically (like watch mode in webpack).

Unfortunately, Turborepo did not officially offer a watch mode at the time. To get this feature, you either use a third-party library such as turbowatch, or write the script yourself with a cross-platform file-watching library such as chokidar. Since turbowatch states that "unexpected issues may arise when used with Turborepo's dependsOn", we chose the latter.

// watch.mjs
import { getWorkspaceRoot, getWorkspaces } from 'workspace-tools'
import { watch } from 'chokidar'
import execa from 'execa'
import path from 'path'
import url from 'url'

const dirname = path.resolve(path.dirname(url.fileURLToPath(import.meta.url)), '../')
const workspaceRoot = getWorkspaceRoot(dirname) ?? dirname
const workspaces = getWorkspaces(workspaceRoot)
const targets = workspaces.filter((w) => !w.path.startsWith('projects'))

// NOTE: Map<workspace: WorkspaceInfo, file paths: Set<string>>
const changedFilePathPool = new Map()

const watcher = watch(
  targets.map((t) => path.relative(workspaceRoot, t.path)).flatMap((p) => [`${p}/src/**/*`, `${p}/types/**/*`]),
  {
    ignoreInitial: true,
    atomic: true,
  },
)

const debouncedBuild = debounce(build, 1000)

function main() {
  console.log('Initiating watch for all packages and libraries')
  watcher
    .on('ready', () => console.log('watch task ready'))
    .on('add', onChange)
    .on('change', onChange)
}

main()

function onChange(filepath, stats) {
  const mappedWorkspace = targets.find((workspace) => path.join(workspaceRoot, filepath).startsWith(workspace.path))
  if (mappedWorkspace === undefined) {
    return
  }
  const prev = changedFilePathPool.get(mappedWorkspace)
  if (prev === undefined) {
    changedFilePathPool.set(mappedWorkspace, new Set([filepath]))
  } else {
    changedFilePathPool.set(mappedWorkspace, new Set([...prev, filepath]))
  }
  debouncedBuild()
}

async function build() {
  const pool = new Map(changedFilePathPool)
  changedFilePathPool.clear()

  let logBuffer = '[list of changed files]\n'
  for (const [workspace, filepaths] of pool.entries()) {
    logBuffer += `${workspace.name}\n`
    logBuffer += Array.from(filepaths)
      .map((p, i) => `  ${(i + 1).toString().padStart(2, '0')} : ${path.relative(workspace.path, p)}`)
      .join('\n')
    logBuffer += '\n'
  }
  console.log(logBuffer)

  const jobs = [...pool.keys()].map((workspace) => {
    return new Promise((res, rej) => {
      try {
        console.log(`[${workspace.name}]: start build\n`)
        const command = `pnpm tr build --filter=${workspace.name}`
        const child = execa.command(command)
        child.stdout.on('data', (chunk) => {
          console.log(`[${workspace.name}]: ${chunk}`)
        })
        child.stderr.on('data', (chunk) => {
          console.error(`[${workspace.name}]: ${chunk}`)
        })
        child.finally(() => res())
      } catch (e) {
        rej(e)
      }
    })
  })

  await Promise.all(jobs)
    .then(() => {
      console.log('build complete')
    })
    .catch((e) => {
      console.error(e)
    })
}

function debounce(func, timeout = 1000) {
  let handler = null
  return () => {
    handler && clearTimeout(handler)
    handler = setTimeout(() => {
      func()
    }, timeout)
  }
}

The script above runs the pnpm tr build --filter=${workspace.name} command whenever the chokidar file watcher detects a file change. Running this script with Node alongside the project completes the build orchestration.
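For example, the watcher could be wired up next to a project's start command roughly as below; the script path and project name are assumptions for illustration, not the team's actual setup.

// package.json (monorepo root): hypothetical scripts
{
  "scripts": {
    "watch": "node ./scripts/watch.mjs",
    "dev:seller": "node ./scripts/watch.mjs & pnpm tr run --filter=@projects/seller-admin"
  }
}

Here the watcher simply runs in the background of the same shell; a tool like concurrently could be used instead to manage both processes.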

Code Architecture

People (and especially developers) think differently, so if you work in your own style without setting any rules, you will see some effect at the beginning of the project, but as time goes by, you will also have a higher chance of having maintenance problems.

To avoid this, each dev team defines ways to help design and implement projects more efficiently, along with various constraints such as coding conventions, system architectures, and design patterns. In this section, we introduce how the Baemin Commerce Admin architecture is structured.

What's an efficient architecture?

This is a question most developers have been asked at least once in a job interview. Rather than giving the stereotypical answer, I decided to think about which architecture is essential for Baemin Commerce Admin from a more practical point of view.

Looking at the business development process, work usually proceeds in an MVP / Phase X fashion: develop quickly, open up the minimum capability, then expand. Translated into development terms, this can be condensed into two keywords, productivity and scalability, and from these two keywords we derived the following necessary and sufficient conditions.

  • It will be easy for everyone to understand and organize (= productivity)
  • Each component of the architecture must have a clear distinction between roles and responsibilities (= scalability)

(Image: Platform architecture v.0)

We decided to call each element that makes up the architecture map a layer. Each layer should be clearly separated, there should be a Data Transfer Object (DTO) between each layer, and data transfer will be carried out through DI. Each layer's role is as follows.

  • Layout layer (design system): the component elements controlled by design system
  • Usecase layer (project): processes business data and delivers it in the form of props for the components in the layout layer, typically through custom hooks in React
  • Business layer (project): contains business logic and processes in-memory data
  • Persist layer (Backend for Frontend; BFF): Continuously checks and consolidates external data to pass them to the business layer

With only four layers, the roles seem clearly separated. However, I was somewhat skeptical about whether it was truly easy for anyone to understand and configure. The boundary between the business and usecase layers seemed ambiguous: how much business logic belongs in the business layer, and where does business logic end? And if the usecase layer's role is merely to pass business data to design system components, a function would suffice; it did not need to be a layer of its own.

(Image: Platform architecture v.1)

Above is platform architecture version 1, a significant improvement over version 0. The design system is no longer treated as a layer, and each of the four layers now contains smaller elements. The arrows indicate the direction of dependencies. The roles of the layers in the new platform architecture are as follows.

  • Design system: no longer treated as a layer, it controls the components within the design system
  • Web-service layer: UI-related logic, such as jsx and styling
  • Bridge layer: all hooks other than those belonging to the Data and Domain layers
  • Data layer: composed of three kinds of modules: cache modules (react-query state management), store modules (global stores and browser storage), and API modules (requesting data from the server)
  • Domain layer: manages server request/response model

The biggest change compared to version 0 is the introduction of the module. A module here is a collection of code with a single, unitary feature; in React it can be written as a hook, as well as a class or function. A module can be reused across multiple layers, which removes a lot of duplicated code.
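As a rough, hypothetical sketch of how a single feature might flow through these layers (all of the names, endpoints, and fields below are made up for illustration, not the actual Admin code):

// layers.tsx (hypothetical sketch)
import axios from 'axios'
import { useQuery } from 'react-query'
import React from 'react'

// Domain layer: server request/response model
export interface ProductDto {
  id: string
  name: string
  price: number
}

// Data layer: API module + cache module (react-query)
const fetchProducts = async (): Promise<ProductDto[]> => (await axios.get<ProductDto[]>('/api/products')).data
const useProductsQuery = () => useQuery(['products'], fetchProducts)

// Bridge layer: a hook that adapts server data into UI-friendly values
export const useProductList = () => {
  const { data = [], isLoading } = useProductsQuery()
  return { products: data.map((p) => ({ ...p, label: `${p.name} (${p.price})` })), isLoading }
}

// Web-service layer: JSX and styling only
export const ProductList: React.FC = () => {
  const { products, isLoading } = useProductList()
  if (isLoading) return <div>Loading...</div>
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.label}</li>
      ))}
    </ul>
  )
}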

We redefined the folder structure in which the layers live, and provided usage examples and Bad/Good case code snippets for each layer as guide documents, to satisfy one of the original goals of the platform architecture: being easy to understand and configure. Through the FitStop event, one of the Baemin development cultures, the code architecture of the existing Admin projects was converted to the new platform architecture.

Once the architecture map changes, the existing code automatically becomes legacy code. For a developer, legacy code is not dreams and hopes; it is more like pain and despair. But if a little pain can improve development productivity down the road, wouldn't that be dreams and hopes after all? With dreams and hopes we went all the way to a third round of improvement, and that is where we are now.

(Image: the current platform architecture)

It is almost identical to the version 1 architecture, but the Domain layer has been renamed the Model layer to better fit its role, and the API module has been pulled into it. In addition, the monorepo's commonly used internal packages and modules were separated into their own group within the layers.

In this way, the platform architecture map has continued to evolve through two rounds of improvement. Since this is an area where it is hard to be 100% right, we gather feedback on usability and update it lightly on a quarterly basis.

Platform Boilerplate

As we took on new business challenges, we often had to create new projects and build-orchestrated packages (or libraries) inside the monorepo and add them to the workspace. Generating new projects and packages by copying existing code was not difficult, but configuring a runnable, feature-less initial environment consumed considerable resources.

Business development had similar problems. When applying the previously defined platform architecture map to code, we often copied existing code as a reference, and unnecessary repetitive work kept recurring.

So we configured a platform boilerplate so that, when creating a new project or package, and when creating architectural layers within a project, well-formed code is generated automatically under a consistent set of rules.

Interactive Command Line Interface (Interactive CLI)

The platform monorepo is configured to run commands through an interactive CLI built with cli-select and chalk, so you do not have to type pnpm or turbo commands every time. The list of commands offered by the interactive CLI is built from the scripts in each workspace's package.json, and a mapping JSON file is declared per workspace to show a description for each script.
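A minimal sketch of such a runner is shown below; the script names, descriptions, and file paths are hypothetical, and the real CLI builds its list from each workspace's package.json and mapping file instead of hard-coding it.

// scripts/cli.mjs (hypothetical sketch of the interactive runner)
import cliSelect from 'cli-select'
import chalk from 'chalk'
import execa from 'execa'

// hypothetical command -> description mapping; the real CLI derives this from workspace scripts
const commands = {
  'pnpm tr run --filter=@projects/seller-admin': 'Run seller admin locally (dependent packages are built first)',
  'pnpm tr build --filter=@packages/shared': 'Build the shared package',
}

const { value } = await cliSelect({ values: Object.keys(commands) })
console.log(chalk.cyan(commands[value]))
await execa.command(value, { stdio: 'inherit' })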

Project and package boilerplate configuration

To automate the creation of projects and packages, we created a boilerplate folder in the monorepo root, put initial-state environment files for a project and a package in it, and wrote a Node script that copies those folders and files into place.

(Image: the boilerplate folder's projects.default structure)

The projects.default subfolders and files in the image above are all copied and added to the existing projects workspace. The elements required along the way (project name, title, required environment variables, subdomain, and so on) are selected or entered by the user through the interactive CLI. Packages are created the same way, except that, unlike projects, they also include the configuration needed to publish them as a library as soon as they are created.

Code Generator

This is, literally, automatic code generation. Using plop, a micro-generator JS framework, you can dynamically inject prompt questions and answers into templates written with the .hbs extension.

// web-service-layer.hbs
import React from 'react'
import { use{{name}} } from './hooks'
import * as Styled from './styles'

interface Props {
  content?: string
}

const {{name}}: React.FC<Props> = ({ content = ' ' }) => {
  // TODO: remove the eslint exception once the hook returns values
  // eslint-disable-next-line no-empty-pattern
  const {} = use{{name}}({})

  return <Styled._Wrapper>{content}</Styled._Wrapper>
}

export default {{name}}

This is an hbs template that creates a component in the web-service layer of the platform architecture. The name value entered by the user through the plop prompt is used as the component name. Generators created with the plop.setGenerator function can be run from the interactive CLI to create architecture layers easily and quickly.
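A generator for the template above might be registered roughly as follows; the generator name, prompt, and output path are assumptions for illustration, not the actual plopfile.

// plopfile.js (hypothetical generator registration)
module.exports = function (plop) {
  plop.setGenerator('web-service-layer', {
    description: 'Create a web-service layer component',
    prompts: [{ type: 'input', name: 'name', message: 'Component name?' }],
    actions: [
      {
        type: 'add',
        path: 'src/components/{{name}}/index.tsx',
        templateFile: 'templates/web-service-layer.hbs',
      },
    ],
  })
}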


Fast Issue Response

In the case of Platform Admin, failing to respond quickly when an issue arises in the operating environment can cause major problems for sellers' store operations. Problems such as being unable to update product inventory, or being unable to change settings when a product has to be pulled from display, can eventually affect even customers of the Baemin app.

API Response Schema Validation

Platform Admin validates on its own the data model schema returned when calling server APIs. If the Swagger response interface written against the OpenAPI specification differs from the actual response, a runtime error is raised. Superstruct is used for this schema validation.

Of course, runtime errors should not occur in the operating environment just because the server response value is different from the defined schema. The purpose of schema verification is to prevent issues in the operating environment in advance by only performing it in the development/beta environment.

(Image: the Swagger response specification for processStatus)

A server response field like the one above is handed to the frontend developer through Swagger. Here you can see that processStatus is a string type with four enum values. On the frontend, the schema is verified through Superstruct as follows.

import { object, union, literal, assert } from 'superstruct'
import axios from 'axios'

const API_PATH = '...'

// Superstruct schema
export const Schema = object({
  processStatus: union([literal('PENDING'), literal('PROCESSING'), literal('COMPLETED'), literal('FAILED')]),
})

// API module
export const getAPI = async () =>
  await axios.get(API_PATH).then((res) => {
    if (process.env.APP_ENV !== 'production') {
      assert(res.data, Schema) // validate the schema only outside the production environment
    }
    return res
  })

On the frontend, you might conditionally render buttons or layouts based on the value of processStatus, or control some state with it. But what if the server adds an unexpected enum value and starts returning it in the actual response? In most cases a change to the server response specification would be shared in advance, but let's assume that sharing was missed for some reason this time.

Unintended behavior occurs because, naturally, the frontend has no handling for the newly added enum value. The bigger problem is that you may not even know the issue exists. Schema validation through Superstruct is very helpful here because it raises a runtime error in this situation. Even someone who does not know the history can, upon seeing the snack bar error below while developing the page, figure out what went wrong without understanding the overall code flow.

(Image: when an undefined 'SUCCESS' value is returned for processStatus, a snack bar error is shown along with a runtime error)

However, overly strict schema validation can become a bottleneck for business development, where fields are frequently added or changed. Platform Admin currently keeps validation flexible, for example by using Superstruct's type(), which allows additional properties, instead of object() for response objects, and by allowing nullable field types.
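As a small illustration of that trade-off (the field names are hypothetical), type() accepts objects that carry extra properties, where object() would reject them:

import { type, object, string, number, nullable, is } from 'superstruct'

// object() rejects unknown keys; type() ignores them
const Strict = object({ processStatus: string() })
const Flexible = type({ processStatus: string(), amount: nullable(number()) })

const response = { processStatus: 'PENDING', amount: null, extraField: true }
console.log(is(response, Strict)) // false: extraField is not allowed
console.log(is(response, Flexible)) // true: extra properties are ignored, and amount may be null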

Sentry & Grafana (monitoring)

There are ways to prevent issues in advance, such as using Superstruct, but what matters even more is how quickly you can recognize an issue when it actually happens. It is especially hard to notice issues on holidays or in the early morning hours when all members are off. Platform Admin receives real-time notifications by tracking errors with Sentry and checks the state of its instances through Grafana monitoring.

Infra

The platform Admins are all developed on Next.js, and each is organized in one of the following two ways depending on the size of the project or the purpose of operation.

  • Web Application Server Based on SSR & CSR Rendering
  • Static Web pages based on CSR rendering

(Image: the server infrastructure composition of the web application)

For web application servers, the application is configured with Docker Compose on an EC2 instance, and images are kept in a private registry (that is, a Docker image repository).

The project is built and deployed with pnpm deploy through GitLab CI, built into a Docker image, and pushed to the registry. After that, a Jenkins pipeline is triggered through the Jenkins REST API. Jenkins creates AWS resources through Baemin's internal deployment system and runs the Docker containers on EC2 through CodeDeploy. This web application server infrastructure will later be converted into a deployment pipeline based on an EKS cluster.

(Image: the static web infrastructure)

Static web deployments are much simpler. Since no web application server is needed, Jenkins sends the build artifacts straight to an S3 bucket, and CloudFront's create-invalidation command is also run from Jenkins.

Remote caching

Setting up caching on GitLab CI/CD can improve package installation or project build speeds. However, such caching has its limitations, as the cache is located in the local machine and cannot be retrieved when the tasks are executed in a different environment. This can be resolved through distributed caching, but there are faster caching methods.


We had already adopted Turborepo for build orchestration, and Turborepo supports remote caching, allowing a single cache to be shared on CI. Remote caching can either use the service provided by Vercel (the US cloud computing company that develops Next.js) or a self-hosted remote cache environment. Because Vercel's remote cache is an external service, it would have required a security review and a monthly per-user cost, so we decided to self-host.

Custom Turborepo remote cache servers are available as open source, with guide documentation for setting up the server from Git or as a Docker image. We created and hosted a remote cache server and configured CI as below.

# turbo.sh (apply remote caching only in the CI environment)
if [[ "$CI" = "true" ]]; then
  turbo --token commerce run $*;
else
  turbo run $*;
fi

# .turbo/config.json (this should be placed in the root directory and the setting below is a must)
{
  "teamid": "team_webfront",
  "apiurl": "https://turbocache.{HOST}"
}

# package.json (with the config below, we can use 'pnpm tr ...' instead of the 'turbo' command)
{
  ...
  "scripts": {
    "tr": "sh ./turbo.sh",
    ...
  }
}

# .gitlab-ci.yaml (set the global variable for applying remote caching in CI)
variables:
  CI: 'true'

Turborepo's remote cache allows almost any task to be cached.


The Future of the Platform

So far, we have covered a lot, from the birth of Platform Admin to the present. Now I would like to introduce the future of Platform Admin. The future here means the technical goals we want to achieve on the platform within a year, not many years from now.

Integrated Admin with Micro Frontend Architecture (MFA)

Currently, most of the common components in Platform Admin have been packaged or turned into libraries, which allowed us to get rid of a lot of duplicate code. But there are still deployment dependencies. If a UI library called @baemin/admin-footer is bumped to a new version and deployed, the change is not reflected on the page until every project using that library is deployed again. The same applies when a problem is found in the UI library.

The way Platform Admin is currently configured has a limitation in that code is ultimately integrated at build time. In other words, there is no fundamental way to avoid every consumer rebuilding whenever one of the separately managed UIs or modules is deployed.

Module Federation

But what about integrating a separated UI or module into runtime rather than build time? Webpack5 makes this possible through Module Federation.
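As a rough sketch of the idea (the container and remote names, URL, and shared settings below are assumptions, not the actual configuration), a host could consume a separately deployed remote at runtime like this:

// webpack.config.js (hypothetical host configuration)
const { ModuleFederationPlugin } = require('webpack').container

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'host_container',
      remotes: {
        // a separately built and deployed remote, loaded at runtime
        adminFooter: 'adminFooter@https://cdn.example.com/admin-footer/remoteEntry.js',
      },
      // share a single React instance between host and remotes
      shared: { react: { singleton: true }, 'react-dom': { singleton: true } },
    }),
  ],
}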


We have not yet decided how finely the Admin components will be split into deployment units, but a rough configuration plan looks like the following.

(Image: left: the current structure; right: the MFA structure)

  • The Host Container manages the part corresponding to the header and sidebar of the admin page.
  • The pages are separated on a component-by-component basis and managed in dynamic remote containers.

A project would no longer contain all of the admin components; its role would be reduced to exposing only the page components it needs to show. The host container could then provide an integrated sidebar/header that switches between the different Admins. We expect many benefits from introducing MFA.

e2e integration

Platform Admin already has a testing environment using Playwright, an e2e testing tool. Through a web dashboard page, you can run a full test for each Admin or a per-menu test for each Admin. However, the effort has stalled for a while for practical reasons: converting the more than 10,000 test cases per Admin into test code is difficult.
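For reference, an individual menu test in Playwright looks roughly like the sketch below; the URL, selector, and assertion are hypothetical, not taken from the actual test suite.

// admin-menu.spec.ts (hypothetical e2e test)
import { test, expect } from '@playwright/test'

test('seller admin: product list menu renders', async ({ page }) => {
  await page.goto('https://seller-admin.example.com/products')
  // the menu should show the product list heading
  await expect(page.getByRole('heading', { name: 'Products' })).toBeVisible()
})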


I think we can write the test code for new pages with the help of ChatGPT, and add a feature that detects changed commits and updates the test code automatically. In any case, once all the test code is in place, we expect to reduce future maintenance costs as well as QA resources.

Currently, the boilerplate simply generates code for new projects and packages. When creating a project in the future, we aim to provide not only code but also infrastructure provisioning through Infrastructure as Code (IaC) and monitoring configuration such as Sentry or Grafana. The ultimate goal is a front-end integrated CLI solution that can be used at the enterprise level.


Conclusion

Looking back, I think we have made a lot of improvements over the past year, which is all the more surprising considering we were doing business development at the same time. There is a lot this article has not covered, including large-scale work to apply the new design system to existing projects, developing administrator-only components, and building Preview Deployment, a deployment environment that lets you preview work in progress.

All of the team members were involved in building Platform Admin, from the initial concept through countless discussions to the implementation; I have merely organized their results here.

Beyond the Baemin Commerce Web Frontend development team, many related departments took part in creating Platform Admin. The experience of nearly 70 people coming together to ship a single feature is especially memorable, and I believe it is our team's role to take good care of Platform Admin, which holds all of that effort.


Original Link: https://dev.to/solleedata/how-we-made-a-commerce-admin-site-dh6
