Why GitHub Copilot isn’t what it seems


GitHub recently announced its new artificial intelligence (AI) coding assistant, GitHub Copilot, but it has already landed in hot water after being found to copy other people’s code.

How it is trained

AI needs to be trained in order to be effective, and because GitHub is the largest host of source code in the world, it has a near-endless supply of code to feed the algorithm and teach it how to write code effectively.
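
As a rough illustration, models like the one behind Copilot learn by predicting the next token of source code given everything that comes before it. The toy sketch below is nowhere near the real thing (Copilot is powered by OpenAI Codex, a large neural network, while this is plain frequency counting), but it shows the basic idea of learning to continue code from a corpus:

```python
from collections import Counter, defaultdict

# Toy "training corpus": in reality this would be billions of lines
# of public code pulled from GitHub repositories.
corpus = (
    "def add(a, b):\n    return a + b\n"
    "def sub(a, b):\n    return a - b\n"
    "def mul(a, b):\n    return a * b\n"
)

CONTEXT = 4  # characters of context; real models use thousands of tokens

# "Training": count which character follows each 4-character context.
counts = defaultdict(Counter)
for i in range(len(corpus) - CONTEXT):
    context = corpus[i : i + CONTEXT]
    counts[context][corpus[i + CONTEXT]] += 1

def complete(prompt, length=40):
    """Greedily extend a prompt with the most likely next character."""
    out = prompt
    for _ in range(length):
        context = out[-CONTEXT:]
        if context not in counts:
            break
        out += counts[context].most_common(1)[0][0]
    return out

# The "model" writes code by replaying the statistics of its training
# data -- which is also why its output can mirror that data so closely.
print(complete("def add(a, b"))
```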

This is not without its faults, however. Poor-quality code can enter the training set and cause issues, so only high-quality, verified working code should be used to get the best results.

Why it is problematic

Because the AI has been trained on other people’s code, it effectively stitches together blocks of what it has learned to produce the code it is asked for.

This is problematic, as there have been multiple reports of it lifting entire sections of code, unedited, from programmers’ projects, leaving in names, links, and other personal information.
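
In one widely shared demonstration shortly after launch, prompting Copilot with the header of the famous “fast inverse square root” function from Quake III produced the original source verbatim, comments included. The Python mock-up below illustrates that failure mode in miniature; the function, author name, and URL are invented placeholders for this article, not a real Copilot completion:

```python
# What the developer types (the "prompt"):
#
#     # fast levenshtein distance
#     def levenshtein(a, b):
#
# The kind of completion that testers reported: a working block lifted
# unedited from someone's repository, credit and link still attached.
# (The author name and URL below are invented placeholders.)

def levenshtein(a, b):
    # Written by Jane Example -- https://github.com/jane-example/strutils
    if not a:
        return len(b)
    if not b:
        return len(a)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(levenshtein("copilot", "pilot"))  # -> 2
```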

While it produces correct results roughly 60% of the time, it still has a long way to go in terms of development.

Will it take my job?

GitHub Copilot requires human input and knowledge to build effective applications. It is intended as an aid for quicker development, not as a replacement for a human developer.

AI cannot foresee or identify changes that might be required in the future, and it cannot work dynamically with a client to deliver what they are looking for.

For now, GitHub Copilot doesn’t have a place in industry because of how new and unpolished it is. However, in the future it could well become a useful tool to assist developers.