AI has yet to earn its spurs as a reliable tool in the business toolkit. So what guidance is available to executives for determining the appropriate level of trust to place in AI systems? To date, regulatory frameworks vary considerably across jurisdictions.
As artificial intelligence becomes more embedded across sectors, governments are responding with new regulations intended to manage risks while enabling innovation. However, complex and fragmented regulatory approaches could undermine trust in AI among key decision-makers. This analysis compares emerging AI governance laws and regulations in the EU, US, and UK, examining their potential impact on trust among the executives, managers, and workers adopting AI systems.
The EU’s AI Act categorises risks and sets rules to protect rights while enabling innovation. The US has an AI Bill of Rights and an executive order on safe AI, but no comprehensive legislation yet. The UK takes a pro-innovation approach, with guidelines for responsible AI use overseen by existing regulators.
EU: The EU AI Act promotes accountability and transparency in AI. Audits that check processes can help executives place greater trust in AI. Managers have duties around monitoring systems to ensure progress and compliance. Restrictions on problematic AI protect workers while allowing innovation, although some uses could still undermine rights.
US: Over 90 per cent of AI executives say that AI improves their confidence in decision-making, but confidence elsewhere in the organisation lags. Academic research shows that ethics shape trust in AI. Companies would use AI more if guidelines for fairness, explainability, and privacy were in place. However, common values across industries do not yet exist.
UK: In the UK, the rules aim to give companies confidence in using AI that is open about how it works and is fair. However, complicated and overlapping regulations across industries create confusion, which may deter executives from adopting AI. There are worries about the economic impact, too. These pros and cons can either build or undermine executives’ trust in AI.
A deeper analysis of these laws and regulations across countries and continents needs to focus on how each strengthens or weakens executives’ trust in AI, and therefore its adoption.
The EU prioritises responsibility at the cost of some innovation. The US enables largely unfettered development yet breeds uncertainty. The UK’s position is muddled, with oversight that is both complex and sparse. Striking the equilibrium needed to sustain trust requires measured governance for principled AI expansion, not drastic swings between overbearing restrictions and scant accountability, both of which deter adoption. As priorities diverge locally, executives must weigh their own context amid competing aims. Steps that uphold ethical standards, welfare, and technological advancement stand the best chance of motivating cross-regional public investment and leadership buy-in.
EU executives must take additional actions to build trust in AI beyond regulations due to two considerations:
First, the EU AI Act establishes accountability and restrictions to manage risks, but achieving genuine adoption and confidence from executives requires further cultural leadership and a commitment to ethical AI.
Second, while regulations provide an oversight framework, progress depends on executives driving change through active capability building, risk management, and internal governance. Going beyond the rules to instil ethical AI across operations builds authentic trust and accelerates adoption.
A recent piece of positive news related to Biden’s AI Bill of Rights is the first round of pilot applications under the NAIRR (National Artificial Intelligence Research Resource), which aims to make AI tools and resources more available, secure, compatible, and accessible for everyone to learn from.1 This can be seen as additional support for management.
Top executives already trust AI to improve decisions. The problem lies with employees’ trust: most staff lack confidence in the technology’s fairness and transparency. Without shared ethical guardrails across sectors, uncertainty persists.
Managers must translate high-level AI principles into understandable workplace policies and training. Openly addressing concerns about bias and job loss, rather than ignoring them, builds trust in AI. Cross-industry collaboration to align core values, cementing transparency and accountability, can give employees confidence that AI will be applied ethically.
The UK’s rules aim to make companies confident in using AI by promoting transparency, accountability, and other trust-building principles. However, regulatory complexity across sectors could reduce this confidence, and there are also concerns about economic impacts. On the other hand, the generative AI framework for HM Government,2 even though directed at the public sector, provides an additional point of reference for businesses on topics spanning adoption and implementation, from make-or-buy decisions to ethics, data, and privacy.
The UK wants people to feel confident about using artificial intelligence by being open about how it works and ensuring that it is fair. But a patchwork of complicated rules across industries creates confusion, which could make executives reluctant to adopt AI. While the goals are sound, too much red tape, and the tendency for rewards to flow mainly to early adopters, may slow things down.
Managers therefore need to push for simpler regulation in their field. Assessing how the technology will affect workers, and being honest about it, counters fears. Taking the lead in spreading AI’s cost-saving benefits more evenly brings everyone along. Removing obstacles in this way helps secure wholehearted buy-in across British businesses.
We have discussed some of the new rules that different governments are introducing for the responsible use of artificial intelligence. These rules help give company leaders confidence to put the technology to use. However, while regulations set intentions, putting principles into practice presents challenges. Having explored high-level policy impacts, we now turn to additional considerations for responsible AI adoption.