AI an embarrassment?

With the start of 2019, I would like to draw our attention to the danger we, as an AI community, are posing to the world. I am not saying that the whole AI community is progressing thoughtlessly. No! The best part about our community is that we think and debate openly about the steps we are taking towards solving intelligence. However, there have been a few incidents which, in my view, were embarrassing. Some papers and works in AI have made me ashamed. I just don't want AI to become another nuclear-bomb tragedy.

DeepFake Algorithm: Fake Porn and Fake News –

Gal Gadot Deepfake

“with great power comes great responsibility” – Ben Parker

DeepFake surfaced on Reddit, posted by a user who wrote a script loosely based on techniques known as "unsupervised image-to-image translation" and "human image synthesis". We know that deep learning has revolutionized everything, and it has taken fakery to new heights as well.

Fake news and fake porn are not new, but now they have "realism" to them. In 2018, we saw a lot of fake porn made by the DeepFake algorithm. DeepFake uses Generative Adversarial Networks (GANs), and by using this algorithm anyone can superimpose anyone's face onto any video they want.
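To make "adversarial" concrete, here is a deliberately tiny sketch of the GAN training dynamic in plain NumPy, assuming one-dimensional data: a two-parameter "generator" learns to shift its samples until a logistic-regression "discriminator" finds them harder to tell apart from "real" data. This is a toy illustration of the idea only, not a face-swap pipeline, and every variable name in it is made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda t: 1.0 / (1.0 + np.exp(-t))

real = 4.0 + rng.standard_normal(512)   # "real" data: samples from N(4, 1)
z = rng.standard_normal(512)            # noise fed to the generator
a, b = 1.0, 0.0                         # generator: G(z) = a*z + b (starts at N(0, 1))
w, c = 0.0, 0.0                         # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

def D(x):
    return sig(w * x + c)

# Phase 1: the discriminator learns to tell real from generated samples
# (gradient ascent on log D(real) + log(1 - D(fake))).
for _ in range(200):
    fake = a * z + b
    dr, df = D(real), D(fake)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

before = np.mean(D(a * z + b))          # how "real" the fakes look to D right now

# Phase 2: the generator learns to fool the (frozen) discriminator
# (gradient ascent on log D(G(z)) with respect to a and b).
for _ in range(200):
    df = D(a * z + b)
    b += lr * np.mean((1 - df) * w)
    a += lr * np.mean((1 - df) * w * z)

after = np.mean(D(a * z + b))
print(after > before)                   # True: the fakes now look more "real" to D
```

In a real DeepFake system both networks are deep convolutional models trained on faces, and the two phases alternate every step, but the push-and-pull is the same.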

It is not that this was impossible with Photoshop and video editing, but DeepFake even matches facial expressions with good accuracy. Secondly, traditional techniques required a good amount of effort and expertise, whereas with this algorithm someone can easily build a platform (like FakeApp) where you take an image of any person, pass in a video, and boom! You get a fake video of that person.

Right now, platforms like Reddit, Twitter and even Pornhub are banning DeepFake-generated porn. It was Daisy Ridley's fake porn which emerged first. Then came Gal Gadot's video simulation, in which the scene was with her step-brother. Other unfortunate targets were Emma Watson, Katy Perry, Taylor Swift and Scarlett Johansson. These videos were debunked quickly, but there are communities of folks trying to improve this algorithm so that it generates even more realistic-looking videos. Shame!

It should be noted that this algorithm poses a threat not only to celebrities but also to ordinary people. It is a boon for "revenge porn". It has become super easy for anyone to make fake porn of someone they hate and want revenge on. Just add one-shot learning to DeepFake and you have another cruel invention. Revenge porn is a topic which would take a whole series of blog posts, or even a book, for me to describe how utterly odious this act is.

Another point to note here is that in the world of Cambridge Analytica and other forces used by political parties to brainwash citizens on a massive scale, DeepFake will serve the purposes of fake news and malicious hoaxes aptly.

Coming from a country like India, I know that people do not have much technical awareness and easily believe anything shared in WhatsApp groups. How easy it has become to show (read: control) those people anything you want. DeepFake has made the problem of fake news and hoaxes more complicated than ever! Shame!

Concluding this topic, I would like to say that as scientists and researchers we need to learn the art of withholding research that can make the world a terrible place to live. We need to introspect and think hard before releasing anything into the world, because once it is on the internet, no matter how hard you try, you won't be able to undo it. Another point to note is that this problem is deeper than it appears. It is not just a technological problem. Many of us don't understand the word "consent". I won't dive deeper into that in this post.

Unfortunately, ctrl+z doesn't work in the real world!

Slaughterbots: Armed AI –

Just google "slaughterbots" and you will find a documentary which shows a dramatized near-future scenario where swarms of inexpensive microdrones use artificial intelligence and facial recognition to assassinate political opponents based on preprogrammed criteria.

I know! I know that it is a dramatization, but if you connect the dots, this looks inevitable to me.

Check this video out. I am amazed by what this guy is able to achieve. I am inspired. Simply brilliant! –

I would just like to point out that GANs were developed with pure intentions but are now being misused in algorithms like DeepFake. I have no doubt that such inventions in the wrong hands can be disastrous.

If you think that this technology is too advanced, then recall that once only scientists were capable of using those monolithic mainframes, but today even a newborn can play with your iPhone. Simply stated, simplification in technology is inherent. Technology de-complexes with time.

Another point I would like to make is that I am totally against the weaponization of artificial intelligence. There should always be a human finger on the trigger, someone who can take last-moment decisions. A human armed force can give the opposition a chance to retreat, to surrender, to disarm, or can even spare them. Right now, even in reinforcement learning, we work with scalar reward models (+1/-1), and it seems really hard to me to score a human act as just +1 or -1. Just to clarify, I am in no way defending terrorists or criminals. Law should take its due course. I am just saying that there should always be a human in the loop. Always!

Ping me or comment below if I need to explain more here…

Mass Surveillance –

With the advent of amazing architectures in computer vision, almost everyone who wants to start a startup has the idea of creating a hardware/software product for mass surveillance.

"We do sentiment analysis, human pose estimation, large-scale face recognition, human profiling, etc." is a common sales pitch in most computer vision AI startups.

I won't debate for or against mass surveillance in this post, but in case you are interested, you can read about it quickly here –

I just want to say that transparency cannot be demanded only from citizens. Governments should also decide to be transparent, because they shouldn't have anything to hide either, right?

I know, I know that this post has diverged from its title many times. Sorry about that.

Concluding thoughts –

I believe, and you are free to debate this in a healthy way, that mindless research should be stopped or regulated heavily in a decentralized way. We definitely don't want to develop another "nuclear bomb" out of pure curiosity, without thinking about the long-term consequences.

There should be a UN agreement, or something like it, between nations that AI should never be weaponized. To me personally, global warming, the evolution of AI, etc. pose a bigger threat than building a wall or a building.

Education (not the grading system, the real education) should keep up with these things. Philosophy and the other liberal arts should be taught to engineering students.

I might have come across as someone preaching idealism, and I might agree with you on that. However, I would consider this post a success if you start thinking more about the consequences of your actions rather than just learning AI to get a job.

Caution: you are dealing with something you yourself don't fully understand.

Yes, that's the state of deep neural networks right now. Let me explain a bit more. Take AlphaGo Zero as an example (I encourage you to read its paper, written by DeepMind and published in Nature). Even if you ask its authors what the exact internal state of the neural network will be under a given condition, they might not be able to answer. That is because neural networks are still a black box for us. They are not yet at a point where you can add a breakpoint and debug from there, as you would in your Python/C++ code with pdb/gdb.
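To make the point concrete, here is a toy sketch in NumPy (a hypothetical two-layer network with random weights, not anyone's real model): you can pause execution inside the forward pass with pdb, but what the "breakpoint" shows you is an unlabeled vector of floats, not a human-readable reason for the output.

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny two-layer network with random weights: 4 inputs -> 8 hidden -> 2 outputs.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((2, 8))

def forward(x):
    hidden = np.tanh(W1 @ x)    # you could pause here with pdb.set_trace() ...
    return W2 @ hidden, hidden  # ... but "hidden" is just 8 unlabeled floats

out, hidden = forward(np.array([1.0, 0.0, -1.0, 0.5]))
print(hidden.shape, out.shape)  # (8,) (2,) -- numbers, not an explanation
```

Stepping through ordinary code shows you named variables whose meaning you wrote down yourself; stepping through a network shows you learned weights that nobody assigned a meaning to. That gap is what "black box" means here.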

I would advise all students and aspiring researchers to take up AI, because we need democratization in this field. We need people from all communities, races, genders, etc. to work in this field collectively, with humanity as the goal.

Now, does this mean that you should stop taking a "leap of faith" in research and exploration? No! Definitely not! Many times explorers have to take steps whose consequences are unknown. This is why we have moved into the scientific age. This is why we have discovered and invented things that have made our lives comfortable. The leap of faith is why we have made discoveries in medicine that have helped us live longer.

I am asking us to learn from the mistakes we have made in the past and not repeat them with a new technology. I am just asking us to be careful!

Let me know your thoughts …

Happy Coding 🙂
