

    Unlocking trust in AI


    BSI’s recent “Trust in AI” survey of more than 10,000 adults from nine countries (including Australia, China, the UK, and the US) sheds light on current attitudes toward artificial intelligence (AI).

    AI is no longer a concept from science fiction. It's becoming an everyday reality with the potential to transform our world and be a force for good across society. However, to truly flourish, it needs the confidence of the public.

    BSI’s survey explores the opportunities AI offers to shape a better future and accelerate progress toward a sustainable world. Let’s explore a few findings from the survey:

    How would you like to see AI shaping our future by 2030?

    79 percent believe AI will help with carbon reductions; however, many remain sceptical that it will deliver.

    AI is poised to become an integral tool for companies to efficiently determine the carbon footprints of products by enabling the swift measurement and analysis of greenhouse gas emissions. However, the public needs to be convinced that this technology is truly effective in order to reap the environmental benefits.

    BSI’s survey asked participants to assess the level of trust necessary for the utilization of AI in different crucial sectors. The results found that 74 percent indicated "trust was needed" for AI in medical diagnosis and treatment, and 75 percent expressed the same for AI in food manufacturing, which includes tasks like ordering and categorizing food based on use-by dates.

    This was replicated across all industries, with between 72 percent and 79 percent of consumers agreeing trust is required for AI to be effectively used in everything from cybersecurity to construction and financial transactions.

    38 percent say their job currently uses some form of AI daily.
    Some countries are already further ahead with AI usage. China (70 percent) and India (64 percent) lead the way in daily usage of AI at work, while respondents in Australia (23 percent) and the UK (29 percent) use it the least. In the US, 37 percent say they currently use AI at work, while 46 percent do not—and about 17 percent are unsure. Among those globally who currently do not use AI at work, nearly half were unsure if their workplace will adopt it by 2030.

    Uncertainty around the uptake of AI exists at the organizational level, especially within cybersecurity and digital risk policies. For some organizations, AI may be relatively new, and some have decided to prohibit the use of such tools on corporate systems to mitigate perceived threats. In some cases, employees have found ways of bypassing these controls by using home computers to complete their work.

    Providing training to build an understanding of how AI can be used at work is critical—something data suggests is not commonplace at present. Helpfully, 55 percent of respondents agree we should be training young people how to work in an AI-powered world.

    61 percent want international guidelines to enable the safe use of AI.
    Guidelines around the safe and responsible use of AI are in demand to further address trust in and awareness of the technology. International standards, such as the forthcoming ISO 42001, and regulations play a role here.

    Governments and regulatory agencies are also beginning to tackle the issue. For instance, President Biden recently issued an executive order outlining the US government's approach to "governing the development and use of AI safely and responsibly."

    Similarly, the European Union's AI Act seeks to provide a "legal framework that aims to significantly bolster regulations on the development and use of artificial intelligence." Other nations are following suit.

    The current lack of understanding and unsteady trust in AI presents a major obstacle, potentially limiting its adoption and benefits. To build confidence and trust in the technology, organizations can establish guardrails governing ethical AI use. (Read Ethical considerations of AI in healthcare by Shusma Balaji, Data Scientist, BSI.)

    We've seen the pitfalls of prior tech revolutions that overlooked public trust, such as the dot-com boom and the rise of social media. As the AI era dawns, equipping people with the proper tools and knowledge is essential to realize AI's potential while avoiding past mistakes.

    This article was originally published by Forbes on December 15, 2023. Join Mark Brown and Digital Trust colleagues as they present at SASIG's event on February 13, 2024.
