AI Adoption Raises Stakes for Data Security
Transportation Firms Should Prioritize Data Governance and System Access as They Add Artificial Intelligence Tools, Industry Experts Say
Key Takeaways:
- Industry leaders said expanding use of AI agents in logistics heightens risks around data security, governance and access that must be addressed before deployment.
- Executives warned that poor data quality, unclear ownership and vulnerable interfaces can undermine model reliability and create openings for unauthorized or malicious actions.
- Experts said companies need stronger oversight, vendor due diligence and employee training to ensure safe data handling and to manage the limits and fallibility of AI systems.
As AI agents begin sending emails, processing orders and making pricing recommendations, the importance of data security, governance and system access continues to grow. Industry leaders said those issues should be addressed from the start.
AI promises significant workflow gains, but it also widens the pathways through which sensitive information travels.
“For agentic tools to work, organizations have to open up emails, calls and voicemails,” said chief innovation officer Eric Rempel. “That opens up data provenance and governance, and there is a security aspect.”
Jonah McIntire, chief product and technology officer at Trimble, said AI agents must be protected from unauthorized prompting.
“If you’ve got an agent that can receive emails, it’s like a home with an unlocked door,” he said. “Anything can be thrown at it, and the more the world knows what that agent can do, the more will be thrown at it.”
As AI’s role grows, access management and oversight need to increase as well.
“Someone with malicious intent could ask an agent to do something nefarious,” McIntire said. “You could ask an agent like this to go and find every percentage … and just increase it by 5 points. You would have incredible corruption.”
Before fleets or logistics companies even introduce AI into their operations, they need clean, consistent, trustworthy data as the foundation.
Levi Sorenson, AI strategy lead for technology and compliance firm Fleetworthy, said most fleets underestimate the magnitude of that challenge.
“There’s so much data, and it’s not all good data,” he said. “You need a whole process to clean it and promote it to known good datasets so it can be used across the organization.”
Fleetworthy uses tiered datasets internally and will pause projects if a customer’s data is too old or inconsistent to support reliable modeling.
“Hygiene isn’t magic — it’s math,” Sorenson said.
Marc El Khoury, CEO of tech-driven trucking company Aifleet, said load data often arrives incomplete or contradictory, making modeling a challenge.
“We’ve built checks and balances to validate information. Does it make sense? Does it align with what we know? Is the rate correct? It’s really hard to build good models when the source data isn’t usable.”
El Khoury said many of the AI failures he observed before founding Aifleet weren’t due to the models themselves but to core operational data that was too inconsistent to support reliable modeling.
He added that a significant governance risk comes from software systems that assume operations go according to plan, which often doesn’t happen in transportation.
“Many algorithms ignore edge cases,” El Khoury said, adding that any AI system used in trucking must “change the decisions continuously” to avoid locking fleets into unsafe or unrealistic assumptions.
Even when systems are secure, reliability can break down if the underlying data is incomplete or siloed.
To support interoperability across the industry, the National Motor Freight Traffic Association is working on common application programming interfaces so that carriers, shippers, 3PLs and TMS providers can exchange data securely and consistently, said Joe Ohr, NMFTA’s chief operating officer.
Data safety depends not only on what companies do internally but also on their vendors.
Keith Peterson, NMFTA’s vice president of operations, said buyers of AI tools must know how their data is stored, whether it is isolated or commingled with other customers’ information, and if it is used to train models.
“Is the data going to be publicly available or is it private?” he asked.
In addition, data ownership needs to be clear, and vendor credibility is part of that equation.
Ahmed Ebrahim, vice president of strategic alliances at McLeod Software, said third-party AI providers often require deep access to customer-specific datasets, which increases the importance of well-defined roles and responsibilities for data handling and model behavior.

“It is amazing the level of depth needed to take that particular customer’s data and train the AI model for their environment,” he said.
Because data governance is about people as much as it is about data, employees should be trained on what can and can’t be shared, Fleetworthy’s Sorenson said.
“You want to make sure someone isn’t accidentally leaking customer lists or other things you don’t want to get out there,” he explained.
However, even the best-governed systems will make mistakes, and McLeod Software CEO Tom McLeod said users need to remain cautious. “Don’t assume AI is infallible. It’s important to know what it can and can’t do,” he said.