March 25, 2023 7:22 am

Imagine, if you will, sometime in the distant future, that while casually walking down a busy high street in downtown Kuala Lumpur your attention is abruptly drawn to a crowd of people huddled closely together, enveloped by intense and thunderous chatter, before a television screen in a shop window.

A broadcast is streaming: you observe what appears to be the prime minister publicly announcing his abrupt resignation from office at a press conference in Putrajaya, citing an inability to cope with the stresses of government service and a desire to go into permanent retirement.

Emotions run high and the crowd disperses in rage, leaving the scene instantly, shouting obscenities and vulgarities, while you stand gobsmacked.

You frantically return home to your computer, wanting to uncover the reasons for the prime minister's decision, when you realise something rather odd.

The prime minister made no such statement; he was still very much abroad, attending an international summit.

You come to find that his voice and likeness had been accurately co-opted by deep fake technology; the fraudulent live stream was created by malicious parties, as part of a political ploy, conspiring to tarnish the prime minister's reputation, cause mass confusion and instigate social unrest in Malaysia.

You are left in total disbelief, deceived by near-genuine footage of a telecast that simply never took place.

This grim reality is not too far from us, given the steady advancement of this technology.

The authorities must urgently look into the issue of deep fakes and how their potential weaponisation could threaten national security and the welfare of Malaysians.

Deep fakes have only recently entered the cultural lexicon, gaining notoriety a few years ago.

The term itself derives from the fact that the technology comprises artificial intelligence software that undergoes a process of “deep learning” so that it is able to produce accurate forgeries.

The software in question is programmed, through deep learning, a rigorous process that involves exposing the artificial intelligence to data, to analyse swathes of data sets on a particular subject, be it Instagram posts, YouTube videos and so forth, gathering information and building a comprehensive profile.

It is on the basis of that very profile that the programme is able to generate images or videos of the subject in question, which can be directed to say or do anything in the likeness of that subject.

This is because enough information on the subject has been gathered that it can accurately simulate the subject's speech patterns and facial appearance, even if it does not have footage of the subject saying anything in particular.

It can nonetheless be programmed to depict the subject in a realistic way.

This would make it possible, for instance, to train the programme to produce fake videos that depict Hollywood celebrities performing outrageous acts, American presidents saying the foulest of things and public figures in compromising positions, in a way that is entirely indistinguishable from reality and could most certainly have you fooled.
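For readers curious about the mechanics described above, here is a minimal sketch, in Python with PyTorch, of the shared-encoder, twin-decoder autoencoder idea behind early face-swap deep fakes. The architecture, layer sizes and toy training loop are illustrative assumptions, not a description of any particular system; real forgeries are trained for far longer on thousands of frames of the target subject.

```python
# Illustrative sketch only: a shared encoder with one decoder per identity.
# All sizes, names and the dummy data are assumptions for demonstration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained only on faces of person A (the target)
decoder_b = Decoder()  # trained only on faces of person B (the source footage)

# Each decoder learns to reconstruct its own person's faces from the shared
# latent space; random tensors stand in for real face crops here.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
loss_fn = nn.MSELoss()
optimiser = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)
for _ in range(3):  # a real system would run many epochs over thousands of frames
    optimiser.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimiser.step()

# The "swap": encode person B's footage, decode with person A's decoder,
# producing person A's likeness with person B's pose and expression.
with torch.no_grad():
    fake_a = decoder_a(encoder(faces_b))
```

The essential point for the argument that follows is that nothing in this recipe requires the target's cooperation; publicly available photos and videos are enough.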

The potential destructiveness of deep fake technology has been repeatedly emphasised by critics ever since its modern inception.

Throughout the years in which it has been active, users have exploited the technology to digitally manipulate existing footage by superimposing the face of a particular person onto that very footage.

This was, in fact, the purpose it served in the technology's early days.

The technology gained notoriety on Reddit in 2017, when an anonymous user posted digitally altered pornographic videos that used the faces of prominent celebrities, making it appear as though the celebrities in question were themselves in the videos.

The videos swiftly garnered public attention and went viral.

The very first instance of the technology being used already involved weaponisation: innocent people entirely disassociated from the pornographic industry were degraded and had their identities forcibly implicated in these lewd videos.

Deep fake technology, in the absence of strategic safety parameters, allows widespread assaults on human dignity to be carried out without challenge.

There have been further instances of deep fake technology being used to create sexually explicit content modelled after high-profile internet personalities.

Female streamers on Twitch, an online live-streaming platform, suffered from the mass circulation of deep fakes that appropriated their likenesses, causing grievous upset in the internet community.

Due to the sinister combination of the viral nature of online media and the unchecked capabilities of deep fake technology, practically no action could be taken as the videos were increasingly shared and replicated.

The subsequent democratisation of this technology, which was made accessible to the public, caused a considerable shift in online media.

Realising its potential for satire, internet users created relatively “harmless” videos for the purposes of parody.

The technology was still in its early stages and, in the eyes of the public, there appeared to be very little danger in circulating videos that could be instantly identified as fraudulent if it was in the service of internet humour.

It would become apparent over the years, however, that the consequences of deep fake technology were not trivial and indeed had the potential to instigate damage of near-epic proportions.

In 2022, a fraudulent video of Ukrainian President Volodymyr Zelenskyy demanding the surrender and outright acquiescence of Ukrainian soldiers to the Russian military was circulated on social media.

Ukrainian television stations, in what appeared to be a geopolitical, retaliatory attack, were hijacked and made to televise the fake broadcast in an attempt to cause mass confusion.

Thankfully, the Ukrainian authorities promptly took down the video and issued clarifications to the public at large.

It is important to note that although the deep fake at the time was easily identifiable as fraudulent because the video had certain irregularities and distortions, it nonetheless demonstrated that the technology could be weaponised to jeopardise the integrity of a sovereign state.

It has also posed a threat to international organisations and institutions.

One person, who was able to digitally alter his video feed such that it mimicked the likeness of the Mayor of Kyiv, managed to dupe senior European Union officials into agreeing to conduct video calls with him.

This demonstrated that deep fakes could be exploited to carry out government espionage.

It can thus be firmly established that the technology in question is indeed a matter of national security concerning both the government and its citizens.

It is on an upward trend in the continuing trajectory of technological development, and if very little is done to strategically contain its influence, it could very well contribute to the weakening of Malaysian security, afflicting the lives of many innocent Malaysians, who are the most vulnerable to it.

Deep fake technology's potential in the area of criminal malfeasance is limitless.

A sophisticated variant of this technology could dupe financial institutions into legitimising fraudulent transactions, circulate politically provocative content to incite geopolitical tensions, facilitate identity theft, blackmail individuals through the use of synthetic revenge porn and instigate campaigns of deliberate disinformation and misinformation. The list is not exhaustive.

Despite the negatives of deep fake technology, it would not be right to exclude discussion of the benefits it could confer on society if strictly regulated.

Deep fake technology could be used in the filmmaking and advertising industries to make realistic footage more accessible from remote locations.

It could also be incorporated into education and research, enabling more simulations of historical re-enactments and experimentation.

What is required is a middle ground, one that recognises the detrimental effects of deep fake technology while simultaneously accommodating beneficial advancements in technology.

The government must develop a comprehensive strategy to counteract and combat deep fake technology.

One of the priorities of the Communications and Digital Ministry should be to consider stricter legislation.

In the early months of 2023, the Cyberspace Administration of China, under the powers of the Chinese government, instituted new policy measures that outright outlawed the creation of deep fake media without the explicit consent of users.

National policies could also be modelled after those of the European Union and the US, which prohibit the dissemination of deep fakes in areas that raise political concerns or implicate people in pornographic material.

There must also be consideration of an extension to existing legislation that revises the definition of personal data so that it is more inclusive of more aspects of the human condition, in a way that prevents the digital mimicry of persons.

Since the technology in question is still in its infancy, there must also be efforts to carry out national campaigns that spread awareness of the existence of the technology and its detrimental effects.

This could help the public identify more sophisticated forms of deep fake fraudulence.

Investment in the development of new technologies would be pivotal in this area.

Deep fake detection technology would be immensely useful to both the authorities and the public in being able to instantly report harmful forgeries.
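At its core, such detection amounts to a classifier that scores a piece of footage as genuine or synthetic. The sketch below, again in Python with PyTorch, is a deliberately simplified illustration of that idea; the architecture, input size and reporting threshold are assumptions, and deployed detectors are far larger models trained on labelled corpora of real and forged videos.

```python
# Illustrative sketch of a deep fake detector: a binary classifier over face crops.
# The network here is untrained and its output is meaningless; it only shows the shape
# of the approach (frame in, probability of forgery out).
import torch
import torch.nn as nn

class DeepFakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, 1)  # single logit: how likely the crop is fake

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x)))

detector = DeepFakeDetector()
frame = torch.rand(1, 3, 224, 224)        # stand-in for a face crop from a video frame
probability_fake = detector(frame).item()
if probability_fake > 0.5:                # an assumed reporting threshold
    print("Flag this frame for review")
```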

It is of critical importance that Malaysia strengthens its data borders.

The recent announcement by the government of the creation of a cyber security commission could coincide with new research in the area of deep fake technology.

As early as last year, Europe's policing agency issued a warning over the dangers of the deployment of deep fake technology by foreign actors to undermine public trust in government institutions.

This ruptured relationship between the public and the government could create a rift that is further exploited in ways that destabilise nations.

We must consider ourselves fortunate that we still have the capacity to resolve the potential problems deep fake technology could cause, but there could very well come a time, if it is left unchecked, when it is simply too overwhelming to stop.

This situation must therefore be urgently addressed before it becomes the country's future affliction.

Comments: letters@thesundaily.com