Can We Replicate A.I.-Powered Machines?

It turns out that most researchers do not publish the source code used to train their AI, which troubles other scientists because they cannot simply replicate the results and build on them.

Simply put, the soaring field of Artificial Intelligence is grappling with a replication crisis. This may sound normal to some people, but for AI enthusiasts it is no less than a disaster. But why do we need to replicate or backtrack existing research in the first place? Read on to find out!

What Is Replication?

Replication is the process by which a researcher can backtrack and reproduce the model, software, algorithm, or any other research material. The field of AI is becoming an exception, because doing so is turning into an uphill task for professionals. Without the ability to backtrack, we may get stuck!

Why Is Backtracking So Important?

Contrary to popular belief, replication has clear advantages. You can reproduce the results to verify them or propose changes to the paradigm. This has worked wonders for researchers in the field of automation, who could easily modify the machines they used and upgrade them for good.

So, if you ask us whether it is possible to replicate AI studies today, we would have to say, “No, you cannot.”


Researchers usually don’t share the source code. According to Nicolas Rougier, a computational neuroscientist at France’s National Institute for Research in Computer Science and Automation, people outside the field may assume there are strict rules that will yield the desired results, but those working in it know the real struggle. However, we are in no position to blame anyone for not sharing source code: it may be the copyrighted property of a company, or the researcher may want to be the first to present a development to the field. Things get even more complicated when the software or machine is powered by a machine learning algorithm. Because these machines learn through experience and simulation, you will never get exactly the same behavior as another machine. So even if, in the rarest of cases, you do get the source code, you will never reproduce the original results unless you have trained your machine in the same way.

This is more or less a hypothetical situation anyway, because you will rarely get your hands on the methodology that was used to train the machine in the first place. Trust us when we say that nobody has time to test an algorithm under every possible condition, or across the entire sample space defined in the paper. This is one of the reasons why AI projects take much longer to design and develop than conventional machines or projects.
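To make the point concrete, here is a minimal, hypothetical Python sketch of what publishing a training methodology might look like. The seed value, hyperparameters, toy model, and file name are purely illustrative assumptions, not taken from any actual study; the point is only that once the recipe is pinned down and shared, anyone can rerun it and get the same result.

```python
import json
import random

# Hypothetical training recipe -- the seed and hyperparameters are illustrative only.
config = {"seed": 42, "learning_rate": 0.1, "epochs": 5}

def train_toy_model(cfg):
    """Fit y = w * x with plain per-sample SGD on synthetic data.

    With the seed and hyperparameters pinned down, two independent runs
    of this function produce exactly the same weight -- which is the
    whole point of publishing the training methodology.
    """
    random.seed(cfg["seed"])  # same seed -> same synthetic data -> same result
    data = [(i / 100, 2.0 * (i / 100) + random.gauss(0, 0.05)) for i in range(100)]

    w = 0.0
    for _ in range(cfg["epochs"]):
        for x, y in data:
            w -= cfg["learning_rate"] * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

if __name__ == "__main__":
    # Publish the full recipe alongside the result so others can replicate it.
    with open("training_config.json", "w") as f:
        json.dump(config, f, indent=2)
    print("learned weight:", round(train_toy_model(config), 4))
```

Run this twice and the printed weight is identical; change the seed or any hyperparameter and it drifts. That is exactly why a paper that omits these details is so hard to reproduce.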


There is no denying that we need to change this culture, because without doing so, exponential growth in the field will remain a far-fetched dream!

Is This Approach Optimal?

Professionals say that the code needs to stay secret, or be shared with only a select few, so that our safety and existence are not put at stake. But how will we know that the ones the source code has been shared with are not going to misuse it? If you are thinking of designing a machine learning algorithm to catch such misuse, you should know that the technology has not reached that mark yet, and you cannot rely on an algorithm to judge human behavior.


What Should Be Done Now?

We might think that being completely open about these technologies would reduce the hullabaloo, but contrary to this belief, it could be no less than a disaster: terrorists and other unstable minds could reverse engineer the work and use it for mass destruction. What we can recommend is forming a board to test the algorithms, much as the medical industry operates: a medicine’s composition is not released until it has been tested and approved. If the same is done for artificially intelligent machines, things will not be as dreadful as predicted.

We can also take the initiative: instead of hiding behind proprietary laws, working together will be far more fruitful. We should not forget that machines are not biased, and if a “judgement day” ever arrives, we are all at risk. Working together and carefully supervised learning are the only safeguards left to us.


Conclusion

We may not be able to replicate these studies, but machines are starting to. In late 2017, we witnessed an AI help create another of its kind: the project, known as AutoML, which Google was working on, marked the next big step for the AI industry. The development is worth celebrating, but while doing so we are ignoring one important fact: because the development is automated, the resulting software is becoming far too complex for humans to understand.

If this continues, we are not far from the day when machines will create themselves without human input. Is that dangerous? We think so! What if we are busy backtracking while the machines start creating their own clones? Keep your eyes open, folks!


If you think otherwise, or have views on this topic worth sharing, drop them in the comments section below!
