Artificial Intelligence: Blessing or Curse?


Kosciuszko Institute

at Partner of Promotion



15.10.2019 09:45 am

Captivating for some, horrifying for others – Artificial Intelligence has always evoked ambivalent feelings. Yes, it is useful and helpful, and it makes our lives much easier. Yet, at the same time: how can we make sure it does not get out of control? What about the ethical aspects of its use? And finally: what if it falls into the wrong hands and is used to deceive us?

Whether AI fascinates or scares us, there can be no doubt: right before our eyes it is reshaping the world (and cyberworld) we know and ushering in an entirely new era. The worst-case scenario paints a gloomy picture: the spectre of technological singularity, with mankind struggling to survive in a society dominated by out-of-control machines. How do we prevent such a turn of events?

Artificial Intelligence is, in a nutshell, a combination of huge quantities of data, algorithms and computing power. Attacking the last of these is not especially complicated; disrupting the data and the algorithms, by contrast, is the new frontier for cybercriminals. Successful tampering with these elements will amplify cybercrime, including through false data injection (FDI): attacks on smart grids, for example, in which compromised sensors report events that never occurred.
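To make the FDI idea concrete, here is a toy Python sketch. It is entirely illustrative: the readings, the threshold and the simple residual-based check are assumptions for demonstration, not anything the article or real grid operators prescribe. It simulates a sensor stream into which false values have been injected and flags readings that deviate sharply from the recent average:

```python
# Toy illustration of false data injection (FDI) and a naive detector.
# All values and thresholds below are illustrative assumptions.

def moving_average(values, window=5):
    """Mean of the last `window` readings (or fewer, early in the stream)."""
    tail = values[-window:]
    return sum(tail) / len(tail)

def detect_fdi(readings, window=5, threshold=10.0):
    """Flag indices where a reading deviates from the recent average by
    more than `threshold`. Note: this is a crude stand-in for real
    grid-state estimation, which sophisticated FDI attacks are
    specifically crafted to evade."""
    flagged = []
    history = []
    for i, reading in enumerate(readings):
        if history and abs(reading - moving_average(history, window)) > threshold:
            flagged.append(i)
        history.append(reading)
    return flagged

# A steady sensor stream with two injected false readings (indices 4 and 7).
stream = [50.1, 49.9, 50.0, 50.2, 80.0, 50.1, 49.8, 15.0, 50.0]
print(detect_fdi(stream))  # [4, 7]
```

The point of the sketch is the asymmetry the paragraph describes: forging a single sensor value is trivial, while defending requires modelling what "normal" data looks like across the whole system.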

“FDI may disturb the operations of various infrastructures using AI systems. We should expect a whole new class of attacks, in which everyday products such as GPS devices, autonomous vehicles or drones are turned into potential weapons capable of inflicting actual damage and physically harming people,” comments Izabela Albrycht, Chairperson of the Board of the Kosciuszko Institute, organiser of CYBERSEC 2019, one of the major cybersecurity conferences in Europe.

Therefore, decision makers and cybersecurity experts must fully understand the future threat landscape and how AI will allow hackers to amplify the efficiency of their attacks. Thanks to AI, both the quality and the quantity of attacks will increase, more entities will be able to launch them, and well-known attack types will evolve: phishing, for example, will become so personalised that it will be difficult to tell whether we are corresponding with a machine or a real person. AI-supported malware will be able to circumvent security measures, avoid detection and reach its goal. Finally, thanks to multi-vector cyberattacks, machines will be able to launch continuous attacks on all the weak points of a targeted system.

Moreover, AI may be used for political and surveillance purposes, leading to social manipulation.

“Such activities may undermine democratic rules, radicalise social behaviour, divide societies or even provoke protests. For instance, the deepfake technique enables the spread of disinformation through forged audio and video files, leading to a situation in which people would not know which information to trust, or would even believe false information,” says Albrycht.

“Then there is individualised propaganda, spread by exploiting our digital shadow. Whatever we do, wherever we are or live, we always leave a trace, and we must remember that AI-based systems collect data which can be analysed not only by intelligence agencies but also by hackers, leading to restrictions on liberties and privacy around the world,” she continues.

On top of that, the widespread use of AI raises ethical concerns: how can we make sure that a machine will follow our values and rules? How do we maintain control over its decision-making process? Who will be responsible for accidents when they occur, for instance with autonomous cars, since there is no driver? These are just a few of the questions that come to mind regarding AI and ethics.

How can we tackle these challenges? Firstly, equipment using AI should be developed in a way that ensures strict compliance with the rules of conduct applicable in a given organisation. Every application should embed logic that determines its procedures, so that every behaviour or decision made by the machine can be explained and justified. It is like raising a child: we need to “teach” our machines to act the way we need and expect them to.

When it comes to liability for damage or harm done by AI, general principles are needed. For instance, the German government has adopted provisions under which, if an accident involving an autonomous car is unavoidable, the vehicle must always choose material damage over harming human beings. A human-centric approach to AI is a must wherever it is used; thanks to it, AI will provide us with support that is not only useful but also safe and trustworthy.
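The requirement that every machine decision be explainable and justified can be sketched as a decision audit log. The following minimal Python example is a hypothetical illustration: the rule names, thresholds and record fields are all made up for the sake of the sketch, and a real autonomous-driving stack would be vastly more complex. The pattern it shows is simply recording each decision together with its inputs and the rule that produced it:

```python
# Minimal sketch of an explainable decision log for a rule-based
# controller. All rule names, thresholds and fields are illustrative.
from datetime import datetime, timezone

audit_log = []

def decide_braking(obstacle_distance_m, speed_kmh):
    """Return a braking decision and record why it was made."""
    if obstacle_distance_m < 10:
        decision, rule = "emergency_brake", "distance_below_10m"
    elif obstacle_distance_m < 30 and speed_kmh > 50:
        decision, rule = "slow_down", "close_obstacle_at_speed"
    else:
        decision, rule = "maintain", "no_hazard_detected"
    # Every decision is logged with its inputs and the rule that fired,
    # so it can later be explained and justified.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": {"obstacle_distance_m": obstacle_distance_m,
                   "speed_kmh": speed_kmh},
        "decision": decision,
        "rule_fired": rule,
    })
    return decision

print(decide_braking(8, 60))   # emergency_brake
print(decide_braking(25, 70))  # slow_down
```

After an incident, the log answers the "who is responsible, and why did it act that way" questions above: each entry ties an outcome back to the inputs seen and the rule applied at that moment.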

With all these doubts in mind, do AI's prospects have to be so gloomy? By no means. We must never forget that AI was invented for our benefit and is playing its part admirably in various fields. To name a few: it helps detect cancer, and it supports rescue missions, where AI-supported machines can reach places in a time unthinkable for “traditional” rescue teams, enter locations too dangerous for humans (e.g. buildings on the verge of collapse) to look for survivors, or transport food and medicine to trapped people. AI can also support the elderly: in some retirement homes, humanoid robots keep residents company, play music, remind them about everyday matters such as medication, and even engage in simple conversations or games.

Moreover, we may expect AI to significantly change the character of warfare. Autonomous weapons systems and military robots will most probably soon become major instruments in a new stage of strategic rivalry between countries, significantly reducing the danger posed to soldiers.

Bearing in mind all the aspects of increasingly widespread AI use, should we be concerned about our security in this new, technologically saturated environment?

“We must remember that AI in any sphere, be it public, private or military, should be sustainable. That would allow us to limit the danger facing ICT networks and systems and, by extension, humanity as a whole. The public sector therefore bears particular responsibility for harnessing AI's potential. AI should, first and foremost, be trustworthy,” says Izabela Albrycht.


António Guterres, Secretary-General of the United Nations, reached the heart of the problem when he said in November 2018 that “technology should empower not overpower us”. This statement should become a guiding leitmotif for each and every aspect of cybersecurity in the AI era.

