
Taking AI to the Next Level Securely
July 17, 2018 Blog

Artificial intelligence (AI) technologies have made an enormous impact across the globe, whether in rural areas where tech companies are using them to improve living conditions, or in busy metropolises where they are applied on a different scale. Their potential is hard to quantify: it ranges from simple daily tasks, such as switching on home alarm systems or arranging appointments, to assisting in the search for cures to debilitating diseases. In fact, the possibilities are limited only by our creativity in applying AI to any problem or industry.

The US military has also benefitted from AI-driven disruption. By applying machine learning and AI to make sense of the huge amounts of footage collected by drones, it has been able to assist analysts by automatically flagging objects of interest, making detection and analysis far easier.

Numerous examples such as these demonstrate that humans and machines now work in symbiosis in increasingly cost-effective and efficient ways. Humans will need to become more adept at using and configuring AI technologies. As AI continues to grow, develop and get smarter, it will make machines many times more accurate and efficient, allowing humans to do much more in less time.

At the same time, the other side of that coin is the risk that AI brings to the table. Existential threats aside, artificial intelligence technologies can be exploited for malicious purposes. A simple but real example: AI has already been used to create new forms of cybercrime, and we have seen a massive rise in automated hacking.

Whether it emerges from academia or industry, AI is a "dual-use technology" with the potential to create imminent danger, endangering countless lives if turned against military installations, nuclear power plants or explosives facilities.

As AI capabilities grow stronger, these potential dangers become ever more real, leaving us blind to where an attack is coming from until it is too late.

This is where researchers need to take heed of the potential misuse of AI and devise ways to curb those threats, such as regulatory frameworks that could help prevent such attacks.

It is clear that AI is developing at an exponential rate; monitoring that development and keeping it safe must therefore become a top priority. AI has to be used responsibly for a better and more secure future.