Researchers revamp fundamental methods used in machine learning
Like reading and sorting through lecture notes to better understand the topics covered in a class, computers use machine learning to decipher, categorize and respond to information.
Machine learning, a branch of artificial intelligence, explores the ability of computer systems to learn from data, identify patterns and make decisions without much need for human interaction and guidance.
To optimize this learning process, Dr. Shahin Shahrampour, assistant professor in the Department of Industrial and Systems Engineering at Texas A&M University, partnered with Dr. Vahid Tarokh, professor in the electrical and computer engineering department at Duke University, to revamp the fundamental pattern analysis algorithms, called kernel methods.
Kernel methods are used for pattern recognition in machine learning; the researchers' goal is to make these methods more efficient and applicable to the constantly changing world of technology.
While kernel methods work well on small to moderate datasets, Shahrampour explained that they do not scale well enough to be applied effectively to larger, more advanced systems.
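The scaling problem comes from the kernel (Gram) matrix, whose size grows quadratically with the number of data points. One classic workaround, in the same spirit as the explicit feature maps named in the paper's title, is to replace exact kernel evaluations with a finite-dimensional feature map whose inner products approximate the kernel, so a cheap linear model can stand in for an expensive kernel machine. The sketch below illustrates this with standard random Fourier features (Rahimi and Recht, 2007) for a Gaussian kernel; the function names and parameters are illustrative and are not taken from the paper itself.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Exact Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2).

    Building this matrix costs O(n^2) time and memory, which is what
    prevents kernel methods from scaling to very large datasets.
    """
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def random_fourier_features(X, D=2000, gamma=0.5, seed=0):
    """Map X (n, d) to explicit features Z (n, D) with Z @ Z.T ≈ K.

    The frequencies W are drawn from the kernel's spectral density
    (a Gaussian with variance 2*gamma for the RBF kernel), so the
    inner product of two feature vectors approximates the kernel.
    Training a linear model on Z costs O(n * D) instead of O(n^2).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Compare the exact kernel matrix with its explicit-feature approximation.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
K_exact = rbf_kernel(X, X)
Z = random_fourier_features(X)
K_approx = Z @ Z.T
max_err = np.abs(K_exact - K_approx).max()
```

With a few thousand random features the entrywise error of the approximation is small, while the per-example cost no longer depends on the dataset size; the paper's contribution concerns learning guarantees for greedily combining such explicit feature maps from multiple kernels.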
The efficiency gains from Shahrampour's and Tarokh's updated algorithms will enhance the cornerstone kernel methods that train computer systems to recognize patterns and apply them to new data, helping machine learning keep pace with rapid innovations in computer technology.
“Even though there have been a lot of advancements in hardware technology that allow computers to process data faster and faster, it's always good to have methods that are computationally efficient,” said Shahrampour.
Shahrampour further explained that having a better understanding of kernel methods may also help overcome the limitations that hinder the expansion and integration of machine learning into deep learning research.
Deep learning, which relies heavily on efficient and accurate machine learning, ultimately aims to emulate the human mind through a neural network of computational models in order to anticipate activity, behavior and trends.
“Deep learning has been tremendously successful in many applications such as natural language processing and image pattern recognition. Many researchers in academia and companies like Google and Microsoft have been working on understanding the reason behind this success,” said Shahrampour. “And I think there is a way to understand deep learning by going back and rethinking some of the classic results from kernel methods.”
Their research was published at the 2018 Conference on Neural Information Processing Systems in Montréal, Québec, in a paper titled “Learning Bounds for Greedy Approximation with Explicit Feature Maps from Multiple Kernels.”