Can We Trust Artificial Intelligence?


Published: 10 November 2022

Reading time: 4 minutes

Experts emphasize that artificial intelligence technology itself is neither good nor bad in a moral sense, but its uses can lead to both positive and negative outcomes.

To trust a technology, you need evidence that it works reliably in all kinds of conditions and that it is accurate. This kind of guarantee is hard to provide for something like a self-driving car, because roads are full of people and obstacles whose behaviour may be difficult to predict. Ensuring the AI system's responses and "decisions" are safe in any given situation is complex.
Trust goes hand in hand with transparency.

How can we make AI trustworthy?
While perfect trustworthiness in the eyes of all users is not a realistic goal, researchers and others have identified ways to make AI more trustworthy. We have to be patient, learn from mistakes, fix things, and not overreact when something goes wrong. This is also the approach Thales takes.
It is committed to a human-centred approach to digital technologies. The focus is on three priorities:
- helping to make the world safer and more secure by increasing the safety and security of its solutions,
- using digital technologies to help build a more environmentally responsible world,
- placing humans at the centre of digital technologies and helping to build a more inclusive, more equitable world.

To be successful in those priorities, Thales made 10 commitments on digital trust and responsibility:

1. Keeping humans in control of artificial intelligence
Systems powered by artificial intelligence are capable of operating autonomously. Thales undertakes to start from the premise that human beings must retain the ability to take control of these systems, based on the use cases established with the customer. Artificial intelligence should enhance people’s ability to make decisions, not replace human beings.

2. Designing explainable artificial intelligence systems
Some artificial intelligence systems operate with little or no clarity as to the process by which inputs are converted into outputs. This “black box” phenomenon can erode users’ trust in these technologies. Thales undertakes to explain the rules by which the algorithms operate and to provide details of the design of the technologies themselves, to the extent possible under the rules governing data confidentiality and protection of sensitive information. 

3. Adopting a privacy-by-design approach
As new threats emerge, from the rapid replication of malfunctions to concerns around the sharing of sensitive information, Thales undertakes to apply the principles of privacy- and cybersecurity-by-design in the development of its systems and solutions. The Group constantly strives to optimise the types and amounts of data needed to achieve the desired outcome. 

4. Striving to make Thales’s solutions as secure and resilient as possible
Cybercrime remains an ever-present danger, and the only way to guard against this threat is to plan ahead and implement appropriate protections.
Thales undertakes to use its expertise, coupled with its innovation capabilities, to develop solutions that make society more digitally secure – now and in the future.

5. Harnessing the power of digital technology to tackle climate change 
Thales undertakes to support innovations that help reduce natural resource and energy use and cut greenhouse gas emissions.

6. Adopting a frugal approach to data
By 2025, global data creation is projected to grow to more than 180 zettabytes. Such a huge volume of data poses a whole range of problems, starting with the energy needed to store and process it.
When developing its digital systems, Thales strives to be reasoned and proportionate in the production and use of data. The Group prioritises smart data over big data, and data quality over data quantity.

7. Making eco-design the norm 
Thales is committed to shrinking the environmental footprint of its products.

8. Tackling discriminatory bias in digital technologies 
Algorithm design and training data can introduce involuntary bias into artificial intelligence systems which, for example, have been seen to discriminate against certain population groups. From the earliest design phase, Thales undertakes to put in place processes to detect bias in its artificial intelligence systems.

9. Promoting inclusion through digital technologies 
According to the United Nations, more than one-third of the world’s population is still offline, while close to one billion people globally do not possess proof of legal identity.
Thales undertakes to use its knowledge and expertise to bring digital inclusion to disadvantaged communities, both through its products – such as digital identification systems and telecoms satellites – and through its employee engagement initiatives.

10. Helping employees navigate the digital age 
To unlock the full potential of the digital transformation, people need to understand how new technologies work so they can make intelligent use of the tools at their disposal.
Thales undertakes to build a community of informed users by providing digital training to all of its employees.

This digital age is far from complete and, with so much change still to come, people are rightly worried about what tomorrow has in store.

We invite you to join us at the Thales Conference on Tuesday, 15 November. Please find out more and register here.
