How We Architected Our AI Classifier Around CBP's Legal Framework

How GingerControl built its HTS Classifier as a pre-classification research tool. GRI logic, CROSS ruling integration, and why most AI classifiers ask the wrong questions.

Chen Cui
7 min read

Co-Founder of GingerControl, building scalable AI and automated workflows for trade compliance teams.

Connect with me on LinkedIn! I want to help you :)

How Should AI Classification Tools Be Structured Under CBP's Framework?

AI classification tools must operate as research resources, not entry-filing tools. CBP ruling HQ H350722 requires structural separation from entry workflows, research-grade output with GRI reasoning chains, and a licensed customs broker making the final classification decision. Tools that skip this architecture risk a § 1641 enforcement action.

Why Do Most AI Classification Tools Ask the Wrong Questions?

Most AI classifiers derive questions from HTS heading descriptions, which works for straightforward products but fails for gray areas involving GRI 3(b) essential character analysis. Effective classifiers ask use-case and market-context questions ("What is the primary reason a consumer would purchase this product?") because essential character is about function and purpose, not just material composition.


In Part 1, I broke down HQ H350722, CBP's first ruling on whether AI classification tools constitute customs business. The short version: if your AI tool is structurally connected to entry filing, it's customs business and requires a broker license. If it operates as a research and planning resource, it's permissible.

This post is about how that distinction shaped every architectural decision in GingerControl's Classifier, and what we think most AI classification tools are getting wrong.

Last updated: March 2026

We Didn't Retrofit. We Started Here.

This is not a case of reading the ruling and scrambling to add a disclaimer. We designed GingerControl's Classifier as a pre-classification research tool from the start, before HQ H350722 was published. The ruling validated our architecture. It didn't force us to change it.

That matters because retroactive compliance is fragile. If your product was built to feed classifications directly into entries and you bolt on a disclaimer after the fact, your architecture still pushes users toward the workflow CBP flagged. Structure beats disclaimers.

What Does the Classifier Actually Produce?

Our tool produces research reports, not entry-ready classification codes. Each report includes:

  • Full GRI reasoning chains. The tool walks through General Rules of Interpretation 1 through 6 in sequence, showing which rule resolves the classification and why.
  • Section and Chapter Note references. These are the legal notes that modify or override heading descriptions. Most tools skip them. We surface them because they're where classification disputes actually happen.
  • Relevant CROSS ruling citations. The tool pulls prior CBP rulings on similar merchandise so brokers can see how CBP has historically classified comparable products.
  • A disclaimer consistent with HQ H272798 and HQ H350722. Not decorative. Structurally enforced by the fact that the tool does not file entries, does not connect to entry workflows, and does not direct brokers on what classification to use.

The output is designed to make a licensed customs broker's job faster and more defensible. It does not replace the broker's judgment.
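To make the report shape above concrete, here is a minimal sketch of what a research report and its GRI reasoning chain might look like as a data structure. This is an invented illustration, not GingerControl's actual schema; the field names and the `resolving_rule` helper are assumptions made for this example.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GriStep:
    rule: str       # e.g. "GRI 1", "GRI 3(b)"
    finding: str    # what the rule resolved, or why it did not apply
    resolved: bool  # True if this rule settles the classification

@dataclass
class ResearchReport:
    candidate_heading: str  # proposed HTS heading, e.g. "9503"
    gri_chain: list[GriStep] = field(default_factory=list)
    note_citations: list[str] = field(default_factory=list)  # Section/Chapter Notes
    cross_rulings: list[str] = field(default_factory=list)   # prior CBP rulings
    disclaimer: str = (
        "Research output only. A licensed customs broker must make "
        "the final classification decision."
    )

def resolving_rule(report: ResearchReport) -> Optional[str]:
    """Return the first GRI that resolved the classification, if any.

    The GRIs apply in sequence: a later rule is reached only when the
    earlier ones fail to settle the question.
    """
    for step in report.gri_chain:
        if step.resolved:
            return step.rule
    return None
```

The point of structuring the output this way is that every conclusion carries its reasoning with it, so a broker can audit each step rather than accept a bare 10-digit number.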

GingerControl's HTS Classifier follows GRI logic and asks clarifying questions before assigning a classification, producing audit-ready reports grounded in Section Notes, Chapter Notes, and relevant CROSS rulings.

Why Do Most AI Classifiers Ask the Wrong Questions?

Here's a problem almost nobody talks about: the quality of an AI classification depends entirely on what questions the tool asks. And most tools ask bad ones.

The typical approach is to derive clarifying questions from HTS heading descriptions. The tool reads the tariff language and asks whether the product matches specific terms. This works for straightforward products. It fails badly in gray areas where multiple headings seem plausible.

The reason is that heading-derived questions don't capture essential character under GRI 3(b). Essential character isn't about what a product is made of or what it looks like. It's about what the product is for in the hands of the buyer.

Our classifier asks use-case and market-context questions: What are customers primarily buying this product for? How is it marketed? What function drives the purchasing decision? These questions produce classifications that are more defensible because they reflect how the product is actually used and traded, not just how it reads against a static list of headings.
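The contrast between heading-derived and use-case questions can be sketched in a few lines. The question text and the trigger condition below are invented for illustration; the idea is simply that once more than one heading is plausible, the analysis shifts to GRI 3(b) essential character, which turns on use and purpose rather than material alone.

```python
# Heading-derived questions: sufficient when one heading clearly applies.
HEADING_QUESTIONS = [
    "What materials is the product made of?",
    "Does the product match the heading's named terms?",
]

# Market-context questions: needed when essential character is in play.
ESSENTIAL_CHARACTER_QUESTIONS = [
    "What is the primary reason a consumer would purchase this product?",
    "How is the product marketed and sold?",
    "Which component or function drives the purchasing decision?",
]

def clarifying_questions(candidate_headings: list[str]) -> list[str]:
    """With a single plausible heading, heading-derived questions suffice.
    With several, GRI 3(b) essential-character analysis applies, so add
    use-case and market-context questions."""
    if len(candidate_headings) <= 1:
        return list(HEADING_QUESTIONS)
    return HEADING_QUESTIONS + ESSENTIAL_CHARACTER_QUESTIONS
```

A real classifier would derive the heading list from the tariff itself; the sketch only shows where the two question families diverge.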

The Technical Stack: GRI Logic + AI + the Tariff

If you want the full technical deep-dive on how GRI logic, AI classification, and the tariff schedule work together, including how we handle mixed-material products, sets, GRI 3(a) specificity analysis, and Chapter 98/99 special provisions, I wrote a comprehensive guide:

The Complete Guide to HTS Classification: How AI Is Changing the Game

That piece covers the classification methodology at a level of detail that would make this post three times longer than it needs to be. Read it if you're building in this space or if you want to understand what's actually happening under the hood of AI classification tools.

How Should You Evaluate AI Classification Tools?

If you're evaluating AI classification tools, or building one, here's what CBP's framework demands:

Structural separation from entry filing. The tool cannot be the pathway through which classifications reach CBP. There must be a licensed broker making an independent decision between the tool's output and the entry.

Research-grade output, not entry-ready codes. The tool should produce reasoning, citations, and analysis, not just a 10-digit number. A broker who receives a research report can exercise judgment. A broker who receives a pre-filled classification field is being directed.

Meaningful disclaimers that match actual workflow. If your disclaimer says "consult a licensed customs broker" but your UX funnels users past that step, the disclaimer is decorative. CBP has already ruled that decorative disclaimers don't cure a § 1641 problem.

GRI-based reasoning, not pattern matching. If the tool can't show its work under the General Rules of Interpretation, the broker can't meaningfully review it. And if the broker can't review it, the broker isn't making the classification decision. The tool is.
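The four criteria above can be encoded as a simple checklist. This is a toy evaluation aid for buyers comparing tools, not a legal test; the keys and descriptions are my own shorthand for the points made in this section.

```python
# The four evaluation criteria from this section, as a checklist.
CRITERIA = {
    "separated_from_entry_filing": "Structural separation from entry filing",
    "research_grade_output": "Reasoning, citations, and analysis, not just a code",
    "disclaimer_matches_workflow": "Disclaimer reflected in the actual UX",
    "gri_reasoning_shown": "GRI-based reasoning a broker can review",
}

def evaluate_tool(answers: dict[str, bool]) -> list[str]:
    """Return descriptions of the criteria a tool fails.

    An empty list means the tool meets all four checks; a missing
    answer counts as a failure.
    """
    return [desc for key, desc in CRITERIA.items() if not answers.get(key, False)]
```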

Why This Matters for the Industry

The trade compliance space is at an inflection point. AI classification tools are going to become standard infrastructure. That's not a question anymore. The question is whether the industry builds them right or builds them fast and deals with the legal consequences later.

CBP has now told us exactly where the line is. The companies that respect it will earn the trust of the broker community. The ones that don't will find themselves on the wrong side of a § 1641 enforcement action, and they'll take their customers with them.

We chose to build on the right side of the line. If you're evaluating tools, make sure yours did too.


GingerControl is a pre-classification research tool. It follows the same reasoning process a licensed customs broker uses, including GRI analysis, Section/Chapter Note review, and CROSS ruling research, but the final classification decision rests with licensed professional judgment. GingerControl produces audit-ready documentation that supports that decision; it does not provide legal advice or replace licensed customs expertise.

Try the Classifier

GingerControl is not just a tool. We work with importers and trade compliance teams on process consulting, digital transformation strategy, and end-to-end custom system development.

Talk to our team


Read Part 1: We Built an AI Classifier. Here's Where CBP Drew the Legal Line

I'm Chen Cui, Co-Founder at GingerControl. We build AI and automated trade compliance systems for U.S. importers, exporters, and customs brokers.


References

[REF 1] CBP Headquarters Ruling HQ H350722. Data cited: legal framework for AI classification tools, structural separation requirement, automation principle. Source: CBP CROSS Ruling Database. Published: January 16, 2026.

[REF 2] CBP Headquarters Ruling HQ H272798. Data cited: permissibility of general-purpose classification databases with meaningful disclaimers. Source: CBP CROSS Ruling Database. Published: 2017.

[REF 3] CBP Headquarters Ruling HQ H290535. Data cited: finding that assigning specific subheadings to specific merchandise constitutes customs business. Source: CBP CROSS Ruling Database. Published: 2022.

[REF 4] 19 U.S.C. § 1641, Customs Brokers. Data cited: definition of customs business, licensing requirements, broker supervision obligations. Source: U.S. Code.

[REF 5] 19 C.F.R. § 111.1, Definitions. Data cited: definition of "person," application to automated tools. Source: eCFR.
