Perspective on risk management systems for Customs administrations

By Chris Thibedeau, Chief Executive Officer, TTEK Inc.
As a vendor that designs and deploys border processing systems, we have seen many Customs and border administrations make significant investments in optimizing and modernizing their border processing capabilities and methodologies to meet the demands of today’s fast-paced international trade environment.
Some administrations build, for example, Single Window environments with automated control mechanisms for licences, permits, certificates and other documentation (LPCO) required by other government agencies (OGAs). While this can help reduce release times and promote trade facilitation, the other key variable is the introduction of an effective risk management regime.
Selectivity is key
The landscape of risk management technology solutions for Customs administrations varies widely. In some instances, we find smaller economies and lower-GDP nations not using risk-based decision making at the border at all. Instead, they rely on random inspections or on rudimentary methods driven by older, unsophisticated systems.
In addition, we see many countries embracing border decision making in which “selectivity” is driven by random selection and/or percentage assignment, using red, yellow, and green lanes within their current trade platforms. We should question whether such systems allow for effective controls.
Random assignments for inspection are simply not effective, yet they have emerged as a way for many Customs administrations to manage the growing volume of trade arriving at their borders. To compound the problem, some widely used systems automate this obsolete approach to border management and label it a “risk management module,” a name this type of functionality simply does not deserve.
Conversely, in some instances, we find that Customs officers are often prompted to review and/or inspect all cargo with an equivalent degree of scrutiny. We should recognize that risk aversion in these situations defeats the principles of risk management and can hinder efforts to facilitate pre-approved and/or low-risk trade. In our view, this approach has the following shortcomings:
- costly in resources as it applies the same degree of intensity to all threats;
- constrained in that it forces a lower degree of inspection intensity overall due to the uniform treatment of all cargo and passengers;
- creates a high incidence of officer error due to a greater workload;
- realizes fewer enforcement results – while some may expect high levels of inspection and intervention to yield exceptional results, evidence and experience suggest otherwise;
- encourages normally law-abiding entities to circumvent the system, in order to hasten the cross-border transit of their goods;
- creates opportunities for criminals to circumvent and avoid interdiction by making Customs reactions predictable;
- slows the supply chain down, and hinders economic growth;
- does not scale;
- fails, ultimately, to achieve efficient, secure border management.
Based on the two examples provided, we might conclude that inspecting all shipments, or inspecting some using random percentage assignment, are simply not effective controls, nor do they support the principles of risk management.
Based on the diagnostics conducted to date by intergovernmental organizations globally, many experts recommend that Customs administrations (1) further cultivate selectivity by using sound automated risk analysis and targeting, (2) conduct post-clearance audits to monitor system processing and adjust risk profiles, and (3) integrate new expert systems for anti-smuggling.
Let me explain what we mean by expert systems here. The industry categorizes risk management systems by the manner in which they exploit data. Expert systems include functionality for deductive logic (lookout and watchlist vetting), inductive logic (identification of risk profiles, anomalies, and intelligence indicators using risk scoring), and predictive modelling (automated historical trend analysis to derive predictive models and scenario-based targets). Each of these methodologies is explained later in the article.
While we design and deploy risk management systems for Customs and border agencies, we believe it’s important for our clients to tell us which probable threats they are trying to prevent. Some countries place a focus on threats to health, safety, and security, while others focus on fiscal-related threats that include smuggling, undervaluation, misclassification, and misdeclaration of origin.
If the goal is to achieve economic prosperity, it seems clear that this can be realized through effective targeting and inspection controls, while simultaneously promoting pre-approved and low-risk trade. The right data, robust analytics, and a sound decision-making framework can provide Customs and a sovereign State with the confidence that the right decisions are being made strategically, operationally, and tactically by their officers at the border.
While many of our clients still place a large focus on revenue leakage on imports, revenue evasion is only one of many threats occurring at the border. Others include security, narcotics, sanitary and phytosanitary safety, health, agricultural and environmental impact, commercial disruption, chemical weapons precursors, dual-use goods, prohibited items, weapons and ammunition, intellectual property, endangered species, antidumping, and more.
Today, these various threats are not systematically analysed or managed by Customs administrations and the OGAs with which Customs must coordinate and make interoperable clearance and release decisions on goods at the border.
When one analyses a Customs and border administration’s current state of selectivity, inspection, and overall risk management approach, it becomes clear that there is very limited use of data at the disposal of Customs officers, even though the data is there. As a result, many systems simply do not provide actionable insight to officers in the field.
Larger and modern economies often seek to improve their analytics capabilities with additional data and advance commercial information. While this includes traditional declarations and cargo reports, the data set is often enhanced with additional supply chain data to improve end-to-end supply chain visibility, including bayplans (ship stowage map), container status messages, conveyance reporting, and more.
This additional data and advance commercial information allows a Customs administration to begin “virtualizing” the border, and has spawned centralized analytical units, called National Targeting Centres or NTCs.
Three types of analytical approaches
We believe there are three analytical approaches for data exploitation that can be applied separately or together: the deductive, the inductive, and the predictive approach.
The first-level approach used by many embraces “deductive” reasoning, which applies generalized principles known to be true to reach a specific conclusion. Watchlist vetting and OGA commodity targets use deductive logic, because the scientific conclusion or intelligence work needed to corroborate and predict an outcome has already been performed by another user group. Customs simply uses that information to flag the data when it emerges from the system.
Most risk management systems begin with an initial deductive approach, which is seen as the initial layer of risk assessment. Here is an example: 40 barrels of “chemical cleaning agents” are imported in a 20-foot “dry van” container. The tariff classification code identifies the product as “Arsenic Trichloride,” which falls under the Australia Group Chemical Weapons Precursor List. The container is then detained, pending an investigation on the consignee and delivery address, as well as a potential permit violation.
In the above-mentioned example, Customs administrations have already pre-determined that the commodity is a potential threat. As such, this becomes a simple vetting process against inbound data. When the information is presented, the transaction is flagged to an analyst for action.
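Deductive vetting of this kind amounts to a simple lookup of declared data against a pre-determined control list. The sketch below illustrates the idea; the tariff codes, list contents, and function names are hypothetical, not actual control-list entries.

```python
# Hypothetical sketch of deductive (watchlist) vetting.
# The codes and descriptions below are illustrative only, not a real control list.
CONTROLLED_PRECURSORS = {
    "2812.10": "Arsenic trichloride (chemical weapons precursor list)",
    "2811.19": "Hydrogen cyanide",
}

def vet_declaration(tariff_code):
    """Return the watchlist entry matching a declared tariff code, or None."""
    return CONTROLLED_PRECURSORS.get(tariff_code)

hit = vet_declaration("2812.10")
if hit is not None:
    print(f"Flag for analyst review: {hit}")
```

Because the intelligence work has already been done upstream, the system's job is only to surface the match to an analyst the moment the inbound data arrives.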
The next layer of analysis, called “inductive” reasoning, moves from specific instances to a generalized conclusion. Successful targeting systems employ a process of triage (determining priority) to eliminate low-risk shipments from review, a narrowing process that ultimately leads to an outcome which may suggest that the pending threat is, for example, a smuggling event.
Inductive systems use risk engines or rule management systems to run risk indicator rules against a data set. A risk scoring logic further ranks transactional data in order of risk (high/medium/low), which then facilitates an analytical triage for operational decision support.
This triage is then performed to select shipments of interest for closer scrutiny or inspection that could include a documentation check, a physical inspection, or both. Much like a doctor completing a diagnosis following tests, the results of a Customs examination should be collected in real time and used to validate the reasons for selectivity. This ensures that the system is always updated, and pulsing with the latest smuggling threats and trends.
The following is an example of a targeting system that uses inductive logic: a container vessel “Northern Celebration” is destined to arrive in port in the next 48 hours and files a cargo manifest. The system scores the transaction at 163 points (red/high risk) due to the following:
- Place of receipt = Source country for narcotics.
- Port of loading = Source country for narcotics.
- Container transhipped/re-handled in a port with weak security measures.
- Commodity = Known cover load.
- Commodity inconsistent with container type.
- Gross weight is less than 63% of the maximum payload.
- One-to-one relationship between shipper and consignee.
- Delivery address = P.O. Box number.
Customs refers the container for a full de-stuff due to the suspicions presented within the data. The cargo inspection is non-resultant, yet upon closer scrutiny of the internal reefer unit, 61 kg of heroin/opiates are found concealed within it. Customs then seizes the drugs and attempts a controlled delivery in cooperation with the police.
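Mechanically, the scoring step above is a weighted rule engine: each indicator that fires contributes points, the points are summed, and the total is banded into red/yellow/green. The rule names below mirror the example, but the individual weights and band thresholds are assumptions for illustration only, not actual targeting values.

```python
# Illustrative weighted rule engine for inductive risk scoring.
# Weights and thresholds are hypothetical, chosen so all eight rules sum to 163.
RULES = {
    "origin_narcotics_source":      25,
    "load_port_narcotics_source":   25,
    "weak_security_transhipment":   20,
    "known_cover_load":             30,
    "commodity_container_mismatch": 18,
    "low_payload_utilization":      15,
    "one_to_one_trader_pair":       15,
    "po_box_delivery_address":      15,
}

def score(fired_rules):
    """Sum the weights of the rules that fired and band the total."""
    total = sum(w for rule, w in RULES.items() if rule in fired_rules)
    band = "red" if total >= 100 else "yellow" if total >= 50 else "green"
    return total, band

total, band = score(set(RULES))  # all eight indicators fired
print(total, band)
```

In a production system the rule library is far larger and the weights are tuned continuously against inspection results, but the triage logic reduces to this sum-and-band pattern.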
A few systems operate in this manner today, including the US Customs and Border Protection’s “Automated Targeting System” (ATS), the Canada Border Services Agency’s “TITAN” system, and the US Navy’s “Computer Assisted Maritime Threat Evaluation System (CAMTES).” Building a rule library is key to help derive risk-based decision making in an organization, in order to conduct inductive analytics and generate/share alerts based on a set of predefined and user customizable rules.
It has taken our firm several years to develop and accumulate what now totals in excess of 65,000 proprietary rules, covering many threat types: narcotics, illegal migration, security, intellectual property rights infringements, and revenue leakage.
Finally, let’s consider the third level of analysis, which we refer to as “predictive modelling.” A predictive model draws upon all available historical data, and forms a relationship with the data on file linked to historical seizures, penalties, forced payments, enforcement actions, and other resultant inspections. This analytical tooling provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, etc.) and graphical techniques.
Once the predictive model has been built from a large volume of historical data (e.g. five or more years), it should be re-run on the inbound data (i.e. all data reported on file in the last 24-48 hours), and any shipments that are deemed a match should be flagged and referred for closer scrutiny or inspection.
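The re-run step can be sketched with a deliberately simple model: derive per-lane enforcement rates from historical outcomes, then flag inbound shipments whose lane exceeds a threshold. The data, threshold, and lane definition (origin plus commodity) are synthetic assumptions; a real model would use far richer features and statistical techniques.

```python
# Minimal sketch of re-running a historical model against inbound data.
# All data is synthetic; the 0.5 threshold and (origin, commodity) "lane" are assumptions.
from collections import Counter

history = [  # (origin, commodity, resulted_in_enforcement)
    ("AA", "frozen fish", True),
    ("AA", "frozen fish", True),
    ("AA", "frozen fish", False),
    ("BB", "textiles", False),
]

hits, totals = Counter(), Counter()
for origin, commodity, enforced in history:
    totals[(origin, commodity)] += 1
    hits[(origin, commodity)] += enforced

def enforcement_rate(origin, commodity):
    """Historical share of shipments on this lane that ended in enforcement."""
    key = (origin, commodity)
    return hits[key] / totals[key] if totals[key] else 0.0

inbound = [("AA", "frozen fish"), ("BB", "textiles")]
flagged = [s for s in inbound if enforcement_rate(*s) >= 0.5]
```

Automating this loop, so the model retrains on fresh outcomes and re-scores every new filing window, is what turns a one-off statistical study into the machine learning process described here.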
While healthcare and financial industries are embracing an approach that uses machine learning and artificial intelligence, we see very few Customs administrations embracing this form of analytics in an effective manner.
However, there are many challenges in this area. They include establishing a data warehouse environment of historical transactions, baselining and agreeing with relevant stakeholders on a common definition of enforcement actions, and automating the re-running of the predictive model on inbound data as a machine learning process.
Let’s give an example scenario using predictive analysis. In 2019, Customs and seven OGAs determine all significant enforcement actions to include:
- cargo control violations exceeding a penalty of 1,200 US dollars;
- smuggled goods exceeding 1,200 US dollars in the evasion of duties and taxes;
- all narcotic seizures;
- all monetary seizures greater than 10,000 US dollars in cash;
- all weapon and ammunition seizures;
- all CITES infractions;
- all fraudulent LPCOs;
- all seizures of prohibited items.
A data environment is established to store the last seven years of import declarations and cargo reports. Data Scientists then establish a predictive model, using a quantitative approach for historical trend analysis. Technologists then architect a process to re-run the model against all inbound data (about 14,000 transactions on file in the last 24 hours).
Two in-transit containers are identified as a match and flagged to the Customs analysts on shift at the NTC. The analysts refer the two 20-foot containers for inspection. Scanner teams image the contents of both containers at the port of arrival. One container has an anomaly in the image near its front wall. Upon offload, the container is measured and found to be only 18-foot in length. Upon closer scrutiny, the container appears to have a false wall and 2-foot void. The wall is dismantled and reveals 749,000 US dollars in cash.
A predictive approach uses quantitative analytics to determine a model. The statistical and mathematical process far exceeds what manual analysis is capable of, and adds an extremely powerful layer of analytics that many believe is the future of risk management for border agencies. Today, few countries are embracing machine learning and artificial intelligence using predictive modelling.
We believe in a “stepping stone” process designed to advance the capacity and maturity of a Customs administration. This begins with deductive logic, which is then layered with an inductive framework of rule sets and risk scoring, and is finally wrapped with a predictive modelling capability to automate outcomes for historical trend analysis. The result is an ensemble model that we believe can derive real-time threats more accurately than any other systematic approach available today.
Our risk management system is developed on these fundamentals and further includes a field reporting application for phones and handhelds to collect the results of inspections, whether non-intrusive or fully intrusive in nature. By collecting the right data at the right time, our framework dynamically tunes the scoring for our rule sets in real time, to ensure the system is always pulsing with the latest smuggling threats and trends.
Resultant inspections (seizures, penalties, warnings, etc.) automatically increase the score of the risk indicator rules that influenced the referral, whereas non-resultant inspections use a decaying formula to decrease the scores of the risk indicator rules that fired. This proprietary framework essentially automates post-seizure analysis and the activities performed today by strategic analysts and risk management committees.
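The feedback loop described above can be sketched as a simple weight-update rule. The additive boost and multiplicative decay constants below are assumptions chosen for illustration; they are not the proprietary formula.

```python
# Hedged sketch of dynamic rule tuning from inspection results.
# BOOST and DECAY are illustrative constants, not actual tuning parameters.
BOOST = 2.0    # additive reward when an inspection is resultant (e.g. a seizure)
DECAY = 0.95   # multiplicative decay when an inspection is non-resultant

def update_weights(weights, fired_rules, resultant):
    """Adjust the weights of the rules that drove a referral, based on its outcome."""
    for rule in fired_rules:
        if resultant:
            weights[rule] += BOOST
        else:
            weights[rule] *= DECAY

weights = {"known_cover_load": 30.0, "po_box_delivery_address": 15.0}
update_weights(weights, ["known_cover_load"], resultant=True)           # seizure made
update_weights(weights, ["po_box_delivery_address"], resultant=False)   # clean inspection
```

Applied continuously, this is the automated equivalent of a risk management committee reviewing post-seizure analysis and re-weighting its profiles: productive indicators gain influence, while indicators that keep producing clean inspections gradually fade.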
When you merge this dynamic risk management functionality with electronic Single Window platforms, the result is an extremely powerful tool for coordinated border management between Customs and other government agencies. This strengthens security and border controls, prevents or helps to recoup lost revenue, and decreases release times at the border, resulting in measurable improvements to trade facilitation. This is where Customs administrations can focus and get the biggest bang for their buck.
Companies like ours need assistance from the WCO, the WTO, UNCTAD, the World Bank, and other intergovernmental organizations to strengthen the systems currently in place. No one, including us, is interested in replacing the current trade platforms. Rather, we believe these systems should be retained as the system of record for trade filing, accounting, release, and clearance processing. At the same time, we would like the opportunity to enhance and strengthen these systems for their member countries, to ensure they are modernized with the most leading-edge technology and approaches available.
All Customs administrations should develop robust targeting and selectivity solutions, which become the core decision-making tool to identify high-risk commercial shipments before arrival at the border. These systems should leverage lookouts, watchlists and alerts, as well as configurable business rules, such as known profiles and intelligence or risk indicators, as well as scenarios formed through predictive modelling, machine learning, and artificial intelligence.
However, the confusion and rudimentary functions introduced by less sophisticated systems have, unfortunately, delayed the maturation toward more suitable solutions. The better “mousetrap” is often never realized, leaving developing countries with obsolete approaches and technology. It would therefore be prudent for intergovernmental organizations like the WCO to develop a modern baseline set of requirement guidelines for Members to adopt when procuring risk management systems. Without such a functional checklist, the better mousetrap never emerges.