Artificial Intelligence: Blessing or Curse?
![Artificial Intelligence: Blessing or Curse?](https://financialit.net/sites/default/files/ik-ang-250x250.png)
- Kosciuszko Institute, at Partner of Promotion
- 15.10.2019 09:45 am #ArtificialIntelligence
Captivating for some, horrifying for others – Artificial Intelligence has always evoked ambivalent feelings. It is undeniably useful and helpful, and it makes our lives so much easier. Yet, at the same time: how can we make sure it will not get out of control? What about the ethics of its use? And finally: what if it falls into the wrong hands and is used to deceive us?
Whether AI fascinates or scares us, there can be no doubt: right before our eyes it is reshaping the world (and cyberworld) we know and opening an entirely new era. The worst-case scenario is a gloomy one – the spectre of technological singularity, with mankind struggling to survive in a society dominated by out-of-control machines. How do we prevent such a turn of events?
Artificial Intelligence is, in a nutshell, a combination of huge quantities of data, algorithms and computing power. Hacking the last of these is not particularly complicated; disrupting the data and the algorithms, meanwhile, is the new frontier for cybercriminals. Successfully disturbing these elements will amplify cybercrime, for instance through false data injection (FDI) – attacks on smart grids in which compromised sensors report events that never actually occurred.
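To make the FDI idea concrete, here is a minimal, purely illustrative Python sketch. All values, thresholds and function names are hypothetical, not taken from any real grid system: a compromised sensor adds a bias to its readings, so a naive monitor "sees" a frequency event that never happened on the physical grid.

```python
import random

# Hypothetical illustration of false data injection (FDI): a tampered sensor
# stream makes a fabricated event (a frequency spike) appear in the readings.

NOMINAL_HZ = 50.0          # assumed nominal grid frequency
ALERT_THRESHOLD_HZ = 0.5   # hypothetical deviation that triggers an alarm

def honest_readings(n):
    """Genuine measurements: small random noise around the nominal value."""
    return [NOMINAL_HZ + random.gauss(0, 0.02) for _ in range(n)]

def inject_false_data(readings, offset=0.6):
    """Compromised sensor: adds a fixed bias, forging an event that never occurred."""
    return [r + offset for r in readings]

def alarms(readings):
    """Naive monitor: flags any reading that strays too far from nominal."""
    return [i for i, r in enumerate(readings) if abs(r - NOMINAL_HZ) > ALERT_THRESHOLD_HZ]

genuine = honest_readings(10)
forged = inject_false_data(genuine)
print("alarms on genuine data:", alarms(genuine))  # expected: none
print("alarms on forged data: ", alarms(forged))   # the monitor now "sees" an event
```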
“FDI may disturb the operations of various infrastructures using AI systems. We should expect a whole new kind of attack, in which everyday products such as GPS, autonomous vehicles or drones are turned into potential weapons capable of inflicting actual damage and physically harming people,” comments Izabela Albrycht, Chairperson of the Board of the Kosciuszko Institute, organiser of CYBERSEC 2019, a major European cybersecurity conference.
Decision makers and cybersecurity experts must therefore fully understand the future threat landscape and how AI will allow hackers to amplify the efficiency of attacks. Thanks to AI, both the quality and the quantity of attacks will increase, more actors will be able to launch them, and well-known attack types will evolve: phishing, for example, will become so personalised that it will be difficult to tell whether we are corresponding with a machine or with a real person. AI-supported malware will circumvent security measures, avoid detection and reach its goal. Finally, with multi-vector cyberattacks, machines will launch continuous attacks on every weak point of the targeted systems.
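As a toy illustration of why personalised phishing is harder to catch, consider the hedged sketch below: a simple keyword filter (the phrases, messages and names are all hypothetical) flags a generic template but lets a message tailored to its recipient pass straight through.

```python
import re

# Hypothetical keyword-based filter: catches a generic phishing template,
# but a message personalised to its target matches none of the known phrases.

GENERIC_PHISH = "Dear customer, verify your account now by clicking this link!"
PERSONALISED = "Hi Anna, following up on Tuesday's board meeting - could you re-send the Q3 figures?"

SUSPICIOUS = [r"dear customer", r"verify your account", r"click(ing)? this link"]

def keyword_filter(message):
    """Flag a message if it matches any known phishing phrase."""
    return any(re.search(p, message, re.IGNORECASE) for p in SUSPICIOUS)

print(keyword_filter(GENERIC_PHISH))  # True  - the generic template is caught
print(keyword_filter(PERSONALISED))   # False - the personalised text slips through
```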
Moreover, AI may be used for political and surveillance purposes, leading to social manipulation.
“Such activities may undermine democratic rules, radicalise social behaviour, divide societies or even provoke protests. The deepfake technique, for instance, enables disinformation to be spread through forged audio and video files, leading to a situation in which people no longer know which information they can trust – or even believe untrue information,” says Albrycht.
“Then there is individualised propaganda, spread with the use of our digital shadow. Whatever we do, wherever we are or live, we always leave a trace, and we must remember that AI-based systems collect data which can be analysed not only by intelligence agencies but also by hackers, leading to the restriction of liberties and privacy around the world,” she continues.
On top of that, the widespread use of AI raises ethical concerns: how can we make sure that a machine will follow our values and rules? How do we maintain control over its decision-making process? Who will be responsible for accidents when they occur – in the case of autonomous cars, say, since there is no driver? These are just a few of the questions that come to mind regarding AI and its ethics.
How do we tackle these challenges? Firstly, AI-based equipment should be developed to comply strictly with the rules of conduct applicable in a given organisation. Every application should run on an algorithm that determines its procedures, guaranteeing that every behaviour or decision made by the machine can be explained and justified. It is like raising a child – we need to “teach” our machines to act the way we need and expect them to. When it comes to liability for any damage or harm done by AI, general principles are needed. For instance, the German government has adopted provisions under which, if an accident involving an autonomous car is unavoidable, the vehicle must always choose material damage over harm to human beings. A human-centric approach to AI is a must wherever it is used; with it, AI will provide us with support that is not only useful but also safe and trustworthy.
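As a hedged sketch of what “every decision can be explained and justified” might look like in practice – the scenario, rule encoding and names below are hypothetical simplifications, not a real driving policy – a decision function can pair each choice with a human-readable justification derived from an explicit rule:

```python
from dataclasses import dataclass

# Hypothetical sketch of an auditable, rule-based decision: the rule
# "prefer material damage over harm to people" is applied explicitly,
# and every decision carries a justification a human can review.

@dataclass
class Option:
    name: str
    harms_humans: bool
    material_damage: bool

def choose(options):
    """Return a decision plus the reason that makes it explainable."""
    safe = [o for o in options if not o.harms_humans]
    if safe:
        decision = safe[0]
        reason = f"'{decision.name}' chosen: no human is harmed (rule: humans over property)."
    else:
        decision = options[0]
        reason = f"'{decision.name}' chosen: no harm-free option existed."
    return decision, reason

options = [
    Option("swerve into barrier", harms_humans=False, material_damage=True),
    Option("continue straight", harms_humans=True, material_damage=False),
]
decision, reason = choose(options)
print(decision.name)
print(reason)  # the logged justification that makes the decision auditable
```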
With all these doubts in mind, do the prospects for AI have to be so gloomy? By no means. We must never forget that AI was invented for our benefit and is already playing its part superbly in various fields. To name a few: it helps detect cancer, and it supports rescue missions – AI-supported machines can reach places in times unthinkable for “traditional” rescue teams, enter locations too dangerous for humans (e.g. buildings on the verge of collapse) to search for survivors, or transport food and medicine to trapped people. They can also support the elderly: in some retirement homes, humanoid robots keep residents company, play music, remind them about mundane matters such as medication and bathroom visits, or even engage in simple conversations and games.
Moreover, we may expect AI to significantly change the character of warfare. Autonomous weapons systems and military robots will most probably soon become major instruments in a new stage of strategic rivalry between countries, significantly reducing the danger posed to soldiers.
Bearing in mind all the aspects of the increasingly widespread use of AI, should we be concerned about our security in this new, technologically saturated environment?
“We must remember that AI in any sphere, be it public, private or military, should be sustainable. That would allow us to limit the danger facing ICT networks and systems – and, through them, the whole of humanity. The public sector therefore bears particular responsibility for how AI’s potential is employed. AI should, first and foremost, be trustworthy,” says Izabela Albrycht.
António Guterres, Secretary-General of the United Nations, got to the heart of the problem when he said in November 2018 that “technology should empower not overpower us”. This statement should become the leitmotif for every aspect of cybersecurity in the AI era.