Your supply chain's speed and efficiency depend on the flow of synchronized data between you and your partners. The first step: making sure your own data is clean and consistent.
When change occurs, the agile organization will spring into action, delivering a swift and effective response. The era of lumbering corporate giants has finally given way to the age of the nimble, adaptive competitor—one that considers agility to be both a competitive weapon and a corporate strategy. The more agile your organization, the new thinking goes, the better it will be able to handle such challenges as growth, structural change, globalization, and regulatory pressures. But how do you achieve agility?
A key prerequisite for becoming agile is to make sure all stakeholders within the enterprise share a common view of the data that they use to make business decisions. This enables all parts of your business to work together more effectively than if each part had its own view of the business and an independent plan of action. In order to do that, enterprises need to achieve a clean and consistent interpretation of master data, or standardized attributes that are common across multiple items—for example, price, size, location, and Global Trade Item Number (GTIN). Master data management (MDM) is a program that helps an enterprise keep the master data used by all of its stakeholders consistently high in quality.
[Figure 1] Global data synchronization network conceptual topology
However, an enterprise is only one node within a value chain. As competition continues to increase, more enterprises are recognizing that their ability to become (or remain) agile depends heavily on their trading partners. If those trading partners stumble, an enterprise's own breakthrough performance may not yield overall value chain improvements, and its internal gains will end up being wasted. For an entire value chain to increase its agility or achieve extended breakthrough performance, semantic reconciliation of master data needs to take place across multiple enterprises. Known as "global data synchronization" (GDS), this program should incorporate any relevant industry standards and stretch across multienterprise business processes.
The benefits of such a program can be significant. Managing information more effectively across the value chain not only increases agility but also reduces costs, keeping information integration problems to a minimum and allowing everyone to work from a single consistent, consolidated, and authoritative version of the information. In addition, companies that share information across the value chain are increasingly deriving competitive advantage because they can marshal their assets, partners, and people to work in unison to outmaneuver business rivals.
Sharing data beyond the enterprise
There are already many examples today of master data being extended or shared across trading boundaries:
In the automotive industry, data and semantics (the intended meaning of the data) are shared to support collaborative product design as well as multienterprise business processes related to forecasting and replenishment.
In multichannel retailing (where an enterprise sells the same product, sometimes to the same customer, via different business channels—stores, kiosks, direct/catalog sales, Web sites, and so on), data on products, customers, and locations must be aligned across all of the channels' business processes as well as with third-party logistics operators.
In the consumer goods industries, significant amounts of data are shared with trading partners in order to support dynamic business processes such as promotions, new-product introductions, and product substitutions.
In the food-service industries (where product purchasing is consolidated through a small number of companies that represent large numbers of national and local restaurant chains), trading partners require alignment of data for common items needed by multiple customers.
In both business-to-business and business-to-consumer commercial trading, many suppliers across many industries sell their products and services via a "browse and buy" catalog or service based on commonly accepted data definitions.
Yet in spite of all these data-sharing efforts, the poor quality of master data has become a hot topic for many industries. When you look at an enterprise as part of a wider value chain, the reconciliation of master data becomes much more complicated than it is internally. That's largely because the majority of the data that are consumed by an enterprise reside outside the enterprise. As a result, the enterprise lacks any formal control over those data or the associated external business processes. Therefore, internal enterprise information management and external global data synchronization are two core programs that should be part of any chaos-resilient supply chain management strategy.
An example from the consumer-goods industry
Although the approaches and the IT solutions may vary somewhat, the issues and general concepts of data synchronization apply across most industries. Two sectors that have made significant strides in advancing global data synchronization are retail and consumer goods. These efforts have grown out of an industrywide focus on improving product introduction and replenishment processes. A number of large global retailers—including Wal-Mart, Target, Carrefour, Metro AG, Home Depot, and Ace Hardware—have been working with their distributors and suppliers to develop governance models for the creation and maintenance of master data, industry data dictionaries and data models, and compliance and certification processes for those data dictionaries.
The result of their efforts is what the industry calls the Global Data Synchronization Network (GDSN), which is managed by the not-for-profit global standards organization GS1. GDSN is a collection of enterprise and industry data repositories that are connected electronically in order to assure that product and attribute information is aligned across the value chain. It is focused on ensuring the semantic consistency of product data and other associated master data to improve business agility and to reduce waste.
GDS is achieved via the deployment of a large global product directory that is referenced by all buying and selling processes. Called the Global Registry, this directory serves as a very large "look-up" table of core data such as Global Trade Item Number and Global Location Number (which is used to identify legal entities, trading partners, and locations). The registry is accessed by local or remote data pools, which are repositories of both the core data and extended data (such as price) pertaining to products or commercial contracts shared between trading partners.
Sellers use their data pools to publish product data and other master data. Buyers use their data pools to subscribe to these data. Manufacturers and distributors (sellers/publishers) create and enrich these data (which can include packaging instructions, operating manuals, and warranties) throughout the product's lifecycle. Buyers consume and further enrich the data, even to the point of sharing it with consumers. These data pools can be hosted by data-synchronization services such as 1Sync, Agentrics, GXS, or Sterling Commerce, or they can be maintained behind an enterprise firewall.
The information is stored in the data pools and linked via the Global Registry to the rest of the community (see Figure 1). Only the data needed to uniquely identify the item and seller (the "thin" data) are stored in and registered with the Global Registry. These data include item code, Global Trade Item Number, product category, owner, and other core item attributes. This amounts to only a handful of attributes per item, but across the full range of registered products it nonetheless translates to a very large database.
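As a rough illustration of what registering "thin" data involves, the GS1 check-digit rule used to validate a GTIN before it enters a registry can be sketched in a few lines of Python (the function name and structure here are ours, not part of any GDSN specification):

```python
def gtin_check_digit_ok(gtin: str) -> bool:
    """Validate the GS1 check digit of a GTIN-8, -12, -13, or -14.

    Working right to left from the digit before the check digit, digits are
    weighted alternately 3, 1, 3, 1, ...; the check digit is the value that
    brings the weighted sum up to a multiple of 10.
    """
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(c) for c in gtin]
    body, check = digits[:-1], digits[-1]
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check

print(gtin_check_digit_ok("4006381333931"))  # True for this well-formed GTIN-13
```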
Additional data needed to support all current and future business processes (the "fat" data) are synchronized automatically between publishers and subscribers based on the rules related to the subscription (such as frequency and location) and the relationship requirements. Some partners require unique data attributes to do business; others require more standard data elements. In all cases, this data flow (represented by the blue line in the figure) takes place outside of the normal electronic data interchange or business transaction flow (represented by the black line).
Figure 1 shows a conceptual or logical topology of the GDSN with publishers (suppliers), subscribers (retailers/buyers), and their respective (country, regional) data pools synchronizing messages via a (global) product registry.
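To make the publish-and-subscribe flow more concrete, the following is a minimal, conceptual sketch in Python of how a registry, a seller data pool, and a buyer data pool relate. All class names, identifiers, and attribute values are illustrative assumptions, not any vendor's actual GDSN interface:

```python
from dataclasses import dataclass, field

@dataclass
class GlobalRegistry:
    """Holds only the 'thin' data needed to uniquely identify item and owner."""
    items: dict = field(default_factory=dict)   # GTIN -> core attributes

    def register(self, gtin, owner_gln, category):
        self.items[gtin] = {"owner": owner_gln, "category": category}

@dataclass
class SellerDataPool:
    """Publishes full ('fat') item records and registers the thin data."""
    registry: GlobalRegistry
    catalog: dict = field(default_factory=dict)

    def publish(self, gtin, owner_gln, category, **fat_attributes):
        self.registry.register(gtin, owner_gln, category)
        self.catalog[gtin] = {"category": category, **fat_attributes}

@dataclass
class BuyerDataPool:
    """Subscribes by category and pulls matching fat data on each sync."""
    seller_pool: SellerDataPool
    received: dict = field(default_factory=dict)

    def synchronize(self, category):
        for gtin, record in self.seller_pool.catalog.items():
            if record["category"] == category:
                self.received[gtin] = record

# A supplier publishes one item; a retailer subscribed to "beverages"
# picks up the full record on its next synchronization run.
registry = GlobalRegistry()
supplier = SellerDataPool(registry)
retailer = BuyerDataPool(supplier)
supplier.publish("04006381333931", owner_gln="4098765000004",
                 category="beverages", price=1.99, case_size=24)
retailer.synchronize("beverages")
print(retailer.received)
```

The point of the sketch is the separation of concerns: the registry holds only identifying attributes, while the fat data travels between the trading partners' pools according to the subscription, outside the transaction flow.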
Currently, the GDSN is still at the foundational level. Further development will be needed before the network can fulfill its promise of helping participants increase agility across the value chain. Through 2009, the IT consultancy Gartner expects to see improvements in agility through more efficient business processes related to new-product introduction and price/promotion collaboration as adoption increases in North America, Europe, and Asia.
Other industries follow suit
Global data-synchronization efforts are not unique to the consumer goods and retail sectors. Other industries are also looking to improve supply chain agility and end-to-end decision making across their value chains. We have seen comparable initiatives emerge in the life science industry, such as the Global Healthcare Exchange, or GHX (www.ghx.com), which is building an industry product catalog. Similar programs have been introduced in the automotive manufacturing and retail sector through the Automotive Aftermarket Industry Association (www.aftermarket.org) and National Automotive Dealers Association (www.nada.org). Likewise, the Coalition for Healthcare eStandards (www.chestandards.org) and the Health Care EBusiness Collaborative (www.hcec.org) are developing standards for data synchronization for medical logistics.
Indeed, the GDS framework is valid across any network of trading members and, therefore, can be extended across any industry. It is a framework that can support any number of stakeholders in an industry where the community desires to improve value chain agility.
The right start: intra-enterprise synchronization
As users embark on their GDS programs, they will realize that they can't share even simple data with trading partners unless the data behind their own firewalls is clean. Master data management, then, is a prerequisite for global data synchronization. If your internal data aren't semantically consistent (for example, if one application represents price in U.S. dollars and another in euros), then you will fail to realize the benefits of data synchronization and will be unable to mitigate the costs of external integration. Furthermore, by feeding your partner bad data, you risk causing significant harm to the relationship.
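As a minimal sketch of what that internal reconciliation can look like, assume a hypothetical MDM step that normalizes every price attribute to a single canonical currency before publication (the exchange rate and record fields shown are placeholders):

```python
# Placeholder rate; a real MDM hub would pull rates from an agreed reference source.
RATES_TO_USD = {"USD": 1.0, "EUR": 1.08}

def normalize_price(record: dict) -> dict:
    """Reconcile a price attribute to one canonical currency (here, USD)
    so that every downstream application shares the same semantics."""
    converted = record["price"] * RATES_TO_USD[record["currency"]]
    return {**record, "price": round(converted, 2), "currency": "USD"}

erp_item = {"gtin": "04006381333931", "price": 12.50, "currency": "USD"}
crm_item = {"gtin": "04006381333931", "price": 11.00, "currency": "EUR"}
aligned = [normalize_price(r) for r in (erp_item, crm_item)]
```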
MDM is almost a mirror image of GDS. If GDS is about semantic reconciliation of master data among enterprises, then MDM is partly about the semantic reconciliation of master data inside the enterprise. Yet MDM is the process by which all forms of master data will be managed, not just the product and commercial data needed for GDS.
Different industries are at different stages of the data management effort. Tier 1 members of the retail and consumer goods industries are already active in data synchronization. Tier 2 and below are also active, although often by category/vertical industry: consumer electronics, hard lines (that is, household goods like garden, kitchen, and household furniture), and so on. In the life science industry, Tier 1 members are working with GHX to build their first centralized product data catalog.
The technologies needed to achieve MDM aren't all new, and much effort will be expended in aligning technologies that are already deployed in the enterprise—for example, for product information management and customer data integration. Rationalizing data management across enterprise resource planning, customer relationship management, supply chain management, supplier relationship management, and product lifecycle management systems is only the beginning of the effort. Yet MDM must be addressed if you are to derive any value from GDS. It won't do you any good to learn a new language if you can't then translate the data and information back to your peers.
Technology requirements
Business agility is predicated on the understanding that stakeholders within your enterprise are coordinated. This coordination requires the use of data that is semantically consistent within and across business applications. The more complex (that is, heterogeneous) your IT landscape is, the harder and more costly it will be to achieve that consistency. When agility depends on coordination of business activities across the trading-partner boundary, that complexity requires a new level of attention and IT investment. MDM is the program you need to adopt within the enterprise, and this has to be extended with a GDS program outside your enterprise.
Here are some of the steps that must be taken to assure that data is synchronized both within the enterprise and with your external partners:
Understand what is needed to support GDS and MDM
Assess your IT projects today to ensure that MDM and GDS requirements are being met, are aligned, and are reconciled
Work with strategic customers and/or suppliers to orchestrate GDS programs and to ensure that these programs exploit the necessary technical and industry standards (such as those laid out by GDSN for retail and consumer products or by the Global Healthcare Exchange for life sciences).
The future of GDS
The extension of master data management to external trading partners will only increase. Gartner predicts that through 2007, 30 percent of Global 1000 enterprises involved in the manufacture, movement, buying, and selling of consumer goods will require external data synchronization with their top 10 trading partners. Furthermore, through 2010, 30 percent of Global 1000 enterprises involved in the manufacture, movement, buying, and selling of non-consumer goods (mining, aerospace and defense, electronics, and chemicals) will participate in external data-synchronization programs with their top 10 trading partners.
An enterprise's agility is significantly inhibited by the struggle to ensure that information can flow seamlessly and continuously across all boundaries. Without that seamless flow of information, business transactions can become bogged down, and enterprises may find themselves needlessly spending time and money to reconcile data discrepancies. MDM overcomes these limitations inside the enterprise, and GDS extends the effort across the value chain; combined, the two programs establish a new discipline for enabling business agility.
This article is printed with permission of Gartner Inc. Copyright 2007.
Benefits for Amazon's customers, who include marketplace retailers and logistics services customers, companies that use its Amazon Web Services (AWS) platform, and the e-commerce shoppers who buy goods on its website, will include generative AI (Gen AI) solutions that offer real-world value, the company said.
The launch is based on “Amazon Nova,” the company’s new generation of foundation models, the company said in a blog post. Data scientists use foundation models (FMs) to develop machine learning (ML) platforms more quickly than they could from scratch; because FMs are trained on a broad spectrum of generalized data, they allow builders to create artificial intelligence applications capable of performing a wide variety of general tasks, Amazon says.
The new models are integrated with Amazon Bedrock, a managed service that makes FMs from AI companies and Amazon available for use through a single API. Using Amazon Bedrock, customers can experiment with and evaluate Amazon Nova models, as well as other FMs, to determine the best model for an application.
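As a hedged sketch of what that single-API access pattern can look like from a customer's side, the snippet below uses the boto3 Bedrock runtime client's Converse API; the model identifier and prompt are our assumptions and should be checked against the Bedrock model catalog for your region:

```python
import boto3

# Assumed Nova model identifier; verify the exact ID (and regional availability)
# in the Amazon Bedrock model catalog before use.
MODEL_ID = "amazon.nova-lite-v1:0"

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user",
               "content": [{"text": "Summarize this shipment exception report in two sentences."}]}],
    inferenceConfig={"maxTokens": 300, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```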
Calling the launch “the next step in our AI journey,” the company says Amazon Nova has the ability to process text, image, and video as prompts, so customers can use Amazon Nova-powered generative AI applications to understand videos, charts, and documents, or to generate videos and other multimedia content.
“Inside Amazon, we have about 1,000 Gen AI applications in motion, and we’ve had a bird’s-eye view of what application builders are still grappling with,” Rohit Prasad, SVP of Amazon Artificial General Intelligence, said in a release. “Our new Amazon Nova models are intended to help with these challenges for internal and external builders, and provide compelling intelligence and content generation while also delivering meaningful progress on latency, cost-effectiveness, customization, information grounding, and agentic capabilities.”
The new Amazon Nova models available in Amazon Bedrock include:
Amazon Nova Micro, a text-only model that delivers the lowest latency responses at very low cost.
Amazon Nova Lite, a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs.
Amazon Nova Pro, a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks.
Amazon Nova Premier, the most capable of Amazon’s multimodal models for complex reasoning tasks and for use as the best teacher for distilling custom models.
Amazon Nova Canvas, a state-of-the-art image generation model.
Amazon Nova Reel, a state-of-the-art video generation model that can transform a single image input into a brief video based on a text prompt (such as “dolly forward”).
Economic activity in the logistics industry expanded in November, continuing a steady growth pattern that began earlier this year and signaling a return to seasonality after several years of fluctuating conditions, according to the latest Logistics Managers’ Index report (LMI), released today.
The November LMI registered 58.4, down slightly from October’s reading of 58.9, which was the highest level in two years. The LMI is a monthly gauge of business conditions across warehousing and logistics markets; a reading above 50 indicates growth and a reading below 50 indicates contraction.
“The overall index has been very consistent in the past three months, with readings of 58.6, 58.9, and 58.4,” LMI analyst Zac Rogers, associate professor of supply chain management at Colorado State University, wrote in the November LMI report. “This plateau is slightly higher than a similar plateau of consistency earlier in the year when May to August saw four readings between 55.3 and 56.4. Seasonally speaking, it is consistent that this later year run of readings would be the highest all year.”
Separately, Rogers said the end-of-year growth reflects the return to a healthy holiday peak, which started when inventory levels expanded in late summer and early fall as retailers began stocking up to meet consumer demand. Pandemic-driven shifts in consumer buying behavior, inflation, and economic uncertainty contributed to volatile peak season conditions over the past four years, with the LMI swinging from record-high growth in late 2020 and 2021 to slower growth in 2022 and contraction in 2023.
“The LMI contracted at this time a year ago, so basically [there was] no peak season,” Rogers said, citing inflation as a drag on demand. “To have a normal November … [really] for the first time in five years, justifies what we’ve seen all these companies doing—building up inventory in a sustainable, seasonal way.
“Based on what we’re seeing, a lot of supply chains called it right and were ready for healthy holiday season, so far.”
The LMI has remained in the mid to high 50s range since January—with the exception of April, when the index dipped to 52.9—signaling strong and consistent demand for warehousing and transportation services.
The LMI is a monthly survey of logistics managers from across the country. It tracks industry growth overall and across eight areas: inventory levels and costs; warehousing capacity, utilization, and prices; and transportation capacity, utilization, and prices. The report is released monthly by researchers from Arizona State University, Colorado State University, Rochester Institute of Technology, Rutgers University, and the University of Nevada, Reno, in conjunction with the Council of Supply Chain Management Professionals (CSCMP).
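For readers unfamiliar with survey indices of this kind, a standard diffusion index (shown here purely as an illustration, not as the LMI's published methodology) scores the share of respondents reporting growth plus half the share reporting no change, so that 50 marks the line between expansion and contraction:

```python
def diffusion_index(up: int, same: int, down: int) -> float:
    """Percent of respondents reporting growth plus half the percent
    reporting no change; readings above 50 indicate expansion."""
    total = up + same + down
    return 100 * (up + 0.5 * same) / total

# Made-up responses: 40 report growth, 37 no change, 23 contraction.
print(round(diffusion_index(40, 37, 23), 1))  # 58.5 -> expansion
```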
Specifically, 48% of respondents identified rising tariffs and trade barriers as their top concern, followed by supply chain disruptions at 45% and geopolitical instability at 41%. Moreover, tariffs and trade barriers ranked as the priority issue regardless of company size, as respondents at companies with fewer than 250 employees, 251-500, 501-1,000, 1,001-50,000, and 50,000+ employees all cited it as the most significant issue they are currently facing.
“Evolving tariffs and trade policies are one of a number of complex issues requiring organizations to build more resilience into their supply chains through compliance, technology and strategic planning,” Jackson Wood, Director, Industry Strategy at Descartes, said in a release. “With the potential for the incoming U.S. administration to impose new and additional tariffs on a wide variety of goods and countries of origin, U.S. importers may need to significantly re-engineer their sourcing strategies to mitigate potentially higher costs.”
Grocers and retailers are struggling to get their systems back online just before the winter holiday peak, following a software hack that hit the supply chain software provider Blue Yonder this week.
The ransomware attack is snarling inventory distribution patterns because of its impact on systems such as the employee scheduling system for coffee stalwart Starbucks, according to a published report. Scottsdale, Arizona-based Blue Yonder provides a wide range of supply chain software, including warehouse management system (WMS), transportation management system (TMS), order management and commerce, network and control tower, returns management, and others.
Blue Yonder today acknowledged the disruptions, saying they were the result of a ransomware incident affecting its managed services hosted environment. The company has established a dedicated cybersecurity incident update webpage to communicate its recovery progress, but it had not been updated for nearly two days as of Tuesday afternoon. “Since learning of the incident, the Blue Yonder team has been working diligently together with external cybersecurity firms to make progress in their recovery process. We have implemented several defensive and forensic protocols,” a Blue Yonder spokesperson said in an email.
The timing of the attack suggests that hackers may have targeted Blue Yonder in a calculated attack based on the upcoming Thanksgiving break, since many U.S. organizations downsize their security staffing on holidays and weekends, according to a statement from Dan Lattimer, VP of Semperis, a New Jersey-based computer and network security firm.
“While details on the specifics of the Blue Yonder attack are scant, it is yet another reminder how damaging supply chain disruptions become when suppliers are taken offline. Kudos to Blue Yonder for dealing with this cyberattack head on but we still don’t know how far reaching the business disruptions will be in the UK, U.S. and other countries,” Lattimer said. “Now is time for organizations to fight back against threat actors. Deciding whether or not to pay a ransom is a personal decision that each company has to make, but paying emboldens threat actors and throws more fuel onto an already burning inferno. Simply, it doesn’t pay-to-pay,” he said.
The incident closely followed an unrelated cybersecurity issue at the grocery giant Ahold Delhaize, which has been recovering from impacts to the Stop & Shop chain that it operates across the U.S. Northeast region. In a statement apologizing to customers for the inconvenience of the cybersecurity issue, Netherlands-based Ahold Delhaize said its top priority is the security of its customers, associates, and partners, and that the company’s internal IT security staff was working with external cybersecurity experts and law enforcement to speed recovery. “Our teams are taking steps to assess and mitigate the issue. This includes taking some systems offline to help protect them. This issue and subsequent mitigating actions have affected certain Ahold Delhaize USA brands and services including a number of pharmacies and certain e-commerce operations,” the company said.
Editor's note: This article was revised on November 27 to indicate that the cybersecurity issue at Ahold Delhaize was unrelated to the Blue Yonder hack.
The new funding brings Amazon's total investment in Anthropic to $8 billion, while maintaining the e-commerce giant’s position as a minority investor, according to Anthropic. The partnership was launched in 2023, when Amazon invested its first $4 billion round in the firm.
Anthropic’s “Claude” family of AI assistant models is available on AWS’s Amazon Bedrock, which is a cloud-based managed service that lets companies build specialized generative AI applications by choosing from an array of foundation models (FMs) developed by AI providers like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon itself.
According to Amazon, tens of thousands of customers, from startups to enterprises and government institutions, are currently running their generative AI workloads using Anthropic’s models in the AWS cloud. Those GenAI tools are powering tasks such as customer service chatbots, coding assistants, translation applications, drug discovery, engineering design, and complex business processes.
"The response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable," Matt Garman, AWS CEO, said in a release. "By continuing to deploy Anthropic models in Amazon Bedrock and collaborating with Anthropic on the development of our custom Trainium chips, we’ll keep pushing the boundaries of what customers can achieve with generative AI technologies. We’ve been impressed by Anthropic’s pace of innovation and commitment to responsible development of generative AI, and look forward to deepening our collaboration."