Facebook F8 2019, Day 2: Why Facebook Needs To Reinvent AI

Artificial Intelligence has grown over the years, and Facebook has integrated AI developments and concepts into a variety of its features, including the camera and News Feed analysis. Artificial Intelligence is supposed to make people’s lives easier by optimizing resource utilization and simplifying their experience. In social media, AI has been used to introduce users to a next-gen communication and interaction medium, helping them connect with their loved ones smartly. But Facebook’s AI has failed miserably to filter the News Feed and protect user privacy, and has violated ethics by letting hate speech spread freely on the biggest social media platform in the world. On Day 2 of Facebook’s F8 keynote event, AI and AI security dominated the program, led by Facebook CTO Mike Schroepfer. Let’s have a look at what Facebook plans to do with AI in the future.

Artificial Intelligence at Facebook


Facebook’s core use of Artificial Intelligence is to examine the News Feed. AI technology is used to make sure that any content representing hate speech, violence, racism, or misleading political information, or anything else that violates Facebook’s values or laws across the globe, is flagged and removed. AI is also used to read user searches, preferences, and updates, which allows Facebook to engage users with the right kind of advertisements and to filter the friend suggestions and sponsored posts added to their feeds. This is what supports Facebook’s business model.

How Facebook Uses AI


Facebook has been using advanced Natural Language Processing (NLP) to train its AI to tell different types of content apart, in all sorts of formats. Its NLP uses multilingual embeddings, which allow the AI to understand content in different languages and remove harmful content from the News Feed across the globe.
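The core idea behind multilingual embeddings can be sketched in a few lines of Python. The words, toy two-dimensional vectors, and "harmful direction" below are invented for illustration and are nothing like Facebook's actual model, but they show why a classifier trained on labels in one language can transfer to another when both languages share one vector space.

```python
# Toy sketch of multilingual embeddings: words from different languages
# live in ONE shared vector space, so a classifier trained only on
# English labels can still score text in other languages.
# All vectors below are invented for illustration.

EMBEDDINGS = {
    # English words
    "hate": (0.90, 0.10),
    "love": (0.10, 0.90),
    # Spanish words, mapped into the same space near their translations
    "odio": (0.88, 0.12),
    "amor": (0.12, 0.88),
}

# A "harmful" direction that was (hypothetically) learned from
# English-labeled examples only.
HARMFUL_DIRECTION = (1.0, 0.0)

def harm_score(word: str) -> float:
    """Dot product with the harmful direction; higher = more likely harmful."""
    x, y = EMBEDDINGS[word]
    dx, dy = HARMFUL_DIRECTION
    return x * dx + y * dy

# Because "odio" sits near "hate" in the shared space, the English-trained
# direction transfers to Spanish with no Spanish labels at all.
assert harm_score("odio") > harm_score("amor")
assert harm_score("hate") > harm_score("love")
```

Real systems learn these shared spaces from parallel or aligned text and use hundreds of dimensions, but the transfer mechanism is the same: nearby vectors get nearby scores, whatever the language.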

The Difficulty


Despite its multilingual embeddings, it is highly difficult for Facebook’s AI to capture all content. There are more than six thousand languages in the world, written and spoken by people from an almost equal number of ethnic backgrounds, and training an AI to understand such a vast range of languages is very hard. Facebook’s AI learns from labeled data uploaded into the system. That data comes in vast amounts, and labeling it well enough to train the AI effectively is a hard task for developers. And the larger the amount of data gets, the more room there is for human error.

Learning Photo and Video Forms of Data Is Another Headache


Facebook has come a long way in understanding photographs, especially after the acquisition of Instagram. Facebook has applied Computer Vision techniques such as the Panoptic Feature Pyramid Network (Panoptic FPN) to understand every aspect of an image. Panoptic FPN can segment the individual objects in an image along with its background, which has made Facebook’s AI more capable of filtering harmful content shared on the platform in the form of images.
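The "objects plus background" output that panoptic segmentation produces can be illustrated with a toy merge step. This is not Facebook's implementation; the tiny 2x2 grid, class names, and merge rule below are invented to show the idea: a per-pixel background ("stuff") map and a set of instance masks ("things") are combined into one labeling that covers every pixel.

```python
# Toy illustration of the panoptic-segmentation idea behind Panoptic FPN:
# combine a per-pixel "stuff" map (background classes) with instance masks
# ("things") into one map that labels every pixel exactly once.
# The grids and labels below are invented for illustration.

def panoptic_merge(semantic, instances):
    """Overlay instance masks on top of the semantic background map.

    semantic:  2D grid of background class names, one per pixel.
    instances: list of (instance_id, set of (row, col)) masks.
    Later instances overwrite earlier ones, a deliberately simple rule.
    """
    panoptic = [row[:] for row in semantic]   # start from the background map
    for inst_id, pixels in instances:
        for r, c in pixels:
            panoptic[r][c] = inst_id          # foreground overrides "stuff"
    return panoptic

semantic = [["sky", "sky"],
            ["road", "road"]]
instances = [("person#1", {(1, 0)})]          # one detected person

result = panoptic_merge(semantic, instances)
assert result == [["sky", "sky"], ["person#1", "road"]]
```

A real Panoptic FPN predicts both maps with neural network heads and resolves overlaps with learned confidence scores, but the final merge serves the same purpose: nothing in the image is left unlabeled.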

But understanding video content has been a major weakness for Facebook’s AI, and its machine learning has failed at multiple levels when it comes to catching hate speech and violence in videos uploaded and shared on Facebook. Facebook claims it has tried to use hashtags as data labels to help the AI learn from video content, or at least to infer the relevance of the data from the information the hashtags provide; so far, however, the results have not been up to the mark.

Where Does Facebook AI Fail?


Facebook’s AI runs on a supervised learning framework, in which the AI learns from datasets labeled by human developers. In the last few years, Facebook’s user base has grown to almost one-third of the planet’s population. That means enormous amounts of text, audio, and video data are uploaded to Facebook every fraction of a second, and the AI has to evaluate this content in real time. Because humans are involved, labeling lags behind the incoming data, and the datasets the AI reads contain flaws. This can lead to errors in content detection, and the AI may not behave the same way for all the user content uploaded to the News Feed.
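The supervised setup described above can be sketched in miniature: the model knows only what its human-provided labels tell it, so its quality is capped by the quality and coverage of those labels. The posts, labels, and word-counting "classifier" below are invented for illustration and are far simpler than anything Facebook runs.

```python
# Minimal sketch of supervised learning for content moderation: the model
# is built entirely from human-labeled examples, so mislabeled or missing
# examples directly become model errors. All data below is invented.

from collections import Counter

def train(labeled_posts):
    """Count how often each word appears under each human-given label."""
    harmful, safe = Counter(), Counter()
    for text, label in labeled_posts:
        target = harmful if label == "harmful" else safe
        target.update(text.lower().split())
    return harmful, safe

def classify(text, harmful, safe):
    """Label new text by which word counts it matches more closely."""
    words = text.lower().split()
    h = sum(harmful[w] for w in words)
    s = sum(safe[w] for w in words)
    return "harmful" if h > s else "safe"

# The human-labeled training set; the model never sees anything else.
data = [
    ("attack them now", "harmful"),
    ("i love this photo", "safe"),
    ("great day with friends", "safe"),
]
harmful, safe = train(data)

assert classify("attack everyone", harmful, safe) == "harmful"
assert classify("love this day", harmful, safe) == "safe"
```

Note what happens at scale: every new slang term, language, or video format needs fresh human labels before the model can react to it, which is exactly the lag the article describes.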

Self-Supervised AI: Facebook’s Backup Plan to Remove Harmful Content


On Day 2 of F8 2019, CTO Mike Schroepfer announced that Facebook’s researchers and developers are designing systems that support self-supervised learning. Such an AI system would be able to comprehend information without hand-labeled datasets, making it far less dependent on human annotators. A self-supervised AI can be fed huge volumes of data, but the data is not simply fed in raw as-is: to build training tasks for the machines, researchers remove some bits of information from the content and then have the machine infer the missing bits on its own. This kind of training should let the machine judge the relevance of content flowing through Facebook and scrutinize it against policies and laws in a better manner.
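The "remove some bits and have the machine recover them" idea can be sketched concretely. The toy model below fills in a masked word from its neighbors using simple counts; the two-sentence corpus is invented, and real systems use large neural networks over billions of sentences, but the training signal is the same: raw, unlabeled text supplies its own answers.

```python
# Sketch of self-supervised learning: hide a word, train the model to
# recover it from context. No human labels are needed, because the
# original text itself is the answer key. The corpus below is invented.

from collections import defaultdict, Counter

def build_context_model(corpus):
    """For each (previous, next) word pair, count which word filled the gap."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for i in range(1, len(words) - 1):
            model[(words[i - 1], words[i + 1])][words[i]] += 1
    return model

def predict_masked(model, prev_word, next_word):
    """Guess the hidden word most often seen between prev and next."""
    candidates = model[(prev_word, next_word)]
    return candidates.most_common(1)[0][0] if candidates else None

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]
model = build_context_model(corpus)

# "the ??? sat" - the model recovers a plausible masked word on its own.
assert predict_masked(model, "the", "sat") in {"cat", "dog"}
# "sat ??? the" - seen in both sentences, so the answer is unambiguous.
assert predict_masked(model, "sat", "the") == "on"
```

The payoff for moderation is that the mountain of unlabeled posts, which is a liability under supervised learning, becomes free training material.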


Inclusive Training for AI: Making Facebook’s AR Business More Ethical and Secure


With Portal and Spark AR, Facebook is planning big ventures in the AR and VR tech businesses. But smart tech requires smart AI to deliver a good user experience. Portal cameras are built on augmented-reality concepts and are meant to be key to next-gen facial recognition and smart video chat platforms. Yet these cameras and AR/VR headsets have been found to be biased in understanding commands from people: the lenses have responded differently to different skin tones and genders, failing to offer the same experience to all users. In effect, this reinforces racial and gender discrimination. After all the criticism the company has faced across the globe, this is the last thing Facebook wants.


So, Facebook has announced that it will build inclusive AI to eradicate this issue. The effort is led by Lade Obamehinti, head of technical strategy for Facebook’s AR/VR division. Researchers will test the AI cameras in different lighting conditions with people of different skin tones, to make sure that inclusivity is embedded in the AI’s learning modules without errors or fluctuations.

Why Does Facebook Need to Work on AI?


Facebook let people’s data be abused and sold to marketers and advertisers right under its officials’ noses. It then suffered a major outage, and it also exposed the passwords of millions of users in plain text. And then the incompetence of its AI was laid bare when a man in Christchurch, NZ live-streamed a video on Facebook as he shot worshippers to death in a mosque. The video was streamed live, and it stayed in the News Feed long enough for people to download it.

Facebook has already suffered a great deal of backlash and has lost billions in the process. This is why Mark Zuckerberg used the word “privacy” a million times on Day 1 of this year’s F8. Facebook has lost its leverage and has exhausted the patience of both users and policymakers across the globe. If Facebook cannot get its AI to comprehend the content on its platform, it might face a shutdown in the coming years.

Maybe that was the sole reason Facebook had its CTO on stage for an entire day, assuring users of everything the company is trying to do to protect them.

Will This Move Be A Success?


Zuckerberg himself admitted that these changes are not going to happen overnight. Facebook might have to change its business model to fully fix the flaws in its AI and remove hateful content from the platform for good. The research is ongoing, and the development of these training modules is at an early stage. Still, one thing can be said: Facebook has started taking these matters seriously. And if the effort remains a priority, the company might be able to deliver on the promises its officials made at this year’s F8.

It’s high time Facebook came up with new technology and better AI training modules to understand the content on its platform, especially since the number of users on Facebook and its wholly owned platforms and products grows day by day. Now that Facebook has become an advertising and influence portal on top of a chat and communication service, the data on its servers has outgrown manual categorization and labeling. Will these moves by Facebook Inc. help the company? The answer may not be simple. Until then, all we can do is wait and see how Mr. Zuckerberg lives up to his words.
