July 18, 2022
Deploying machine learning (ML) is a multi-step process. It involves selecting a model, training it for a specific task, validating it with test data, and then deploying and monitoring the model in production. Here, we'll discuss these steps and break them down to introduce you to ML.

ML refers to systems that, without explicit instruction, are capable of learning and improving. These systems learn from data to perform a particular task or function. In some cases, learning, or more specifically training, occurs in a supervised manner where incorrect outputs result in adjusting the model to nudge it toward the correct output. In other cases, unsupervised learning occurs where the system organizes the data to reveal previously unknown patterns. Most ML models follow one of these two paradigms (supervised vs. unsupervised learning).

Let's now dig into what is meant by a model and then explore how data becomes the fuel for machine learning.

Machine-learning model

A model is an abstraction of a solution for machine learning. The model defines the architecture which, once trained, becomes an implementation. Therefore, we don't deploy models; we deploy implementations of models trained from data (more on this in the next section). So models plus data plus training equal instances of ML solutions (Figure 1).

Translation of a model's inputs and outputs is often required. For example, feeding text data into a deep-learning network requires encoding words into a numerical form, commonly a high-dimensional vector given the variety of words that could be used. Similarly, outputs might require translation from a numerical form back into a textual form.

ML models come in many types, including neural network models, Bayesian models, regression models, clustering models, and more. The model that you choose is based upon the problem at hand. In the context of neural networks, models range from shallow multi-layer networks to deep neural networks that include many layers of specialized neurons (processing units). Deep neural networks also have a range of models available based upon your target application.
● If your application is focused on identifying objects within images, then the Convolutional Neural Network (CNN) is an ideal model. CNNs have been applied to skin-cancer detection and outperform the average dermatologist.
● If your application involves predicting or generating complex sequences (such as human language sentences), then Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks are ideal models. LSTMs have also been applied to machine translation of human languages.
● If your application involves describing the contents of an image in human language, then a combination of a CNN and an LSTM can be used (where the image is fed into the CNN, and the output of the CNN represents the input to the LSTM, which emits the word sequences).
● If your application involves generating realistic images (such as landscapes or faces), then a Generative Adversarial Network (GAN) represents the current state-of-the-art model.

These models represent some of the more popular deep neural network architectures in use today. Deep neural networks are popular because they can accept unstructured data such as images, video, or audio information. The layers within the network construct a hierarchy of features that allow them to classify very complex information. Deep neural networks have demonstrated state-of-the-art performance over a wide number of problem domains. But like other ML models, their accuracy is dependent upon data. Let's explore this aspect next.
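As noted earlier, text must be encoded into a numerical form before it can feed a neural network. A minimal sketch of one common approach is to map each vocabulary word to a one-hot vector; the corpus and words below are illustrative, not from a real dataset:

```python
# Build a vocabulary from a small corpus and one-hot encode each word.
# The corpus here is an illustrative toy example.

def build_vocab(corpus):
    """Map each unique word to an integer index."""
    vocab = {}
    for sentence in corpus:
        for word in sentence.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def one_hot(word, vocab):
    """Encode a word as a one-hot vector over the vocabulary."""
    vec = [0] * len(vocab)
    vec[vocab[word.lower()]] = 1
    return vec

corpus = ["deep networks learn features", "networks learn from data"]
vocab = build_vocab(corpus)   # each unique word gets one index
encoded = one_hot("learn", vocab)
```

Note that the vector length grows with the vocabulary, which is why these encodings become high-dimensional for real text; production systems typically use learned, denser embeddings instead, but the translation step is the same in spirit.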
Data and training
Data is the fuel that drives machine learning, not just in operation but also in constructing an ML solution through model training. For the training data of deep neural networks, it's important to consider both quantity and quality.

Deep neural networks require large amounts of data for training. One rule of thumb for image-based classification is 1,000 images per class, but the answer depends upon the complexity of the model and the tolerance for error. Some examples from production ML solutions yield a spectrum of dataset sizes: a facial detection and recognition system required 450,000 images, and a question-and-answer chatbot was trained with 200,000 questions paired with 2 million answers. Smaller datasets can also suffice based upon the problem being solved: a sentiment analysis solution that determines the polarity of opinion from written text required only tens of thousands of samples.

Data quality is just as important as quantity. Given the large datasets required for training, even small amounts of erroneous training data can lead to a poor solution. Depending upon the type of data necessary, your data might go through a cleansing process. This ensures that the dataset is consistent, lacks duplicate data, and is accurate and complete (lacks invalid or incomplete data). Tools exist to support this process. Validating data for bias is also important to ensure that the data does not lead to a biased ML solution.

ML training operates on numerical data, so a pre-processing step can be required depending upon your solution. For example, if your data is human language, it must first be translated into a numerical form for processing. Images can be pre-processed for consistency; for example, images fed into a deep neural network would be resized and smoothed to remove noise (among other operations).

One of the biggest problems in ML is acquiring a dataset to train your ML solution. This could be the largest endeavor of the project, because the data might not exist and might require a separate effort to capture. Finally, the dataset should be segmented between training data and test data. The training portion is used to train the model, and once trained, the test data is used to validate the accuracy of the solution.
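The cleansing and segmentation steps described above can be sketched in a few lines. The record format and the 80/20 split ratio here are illustrative assumptions, not prescribed by the text:

```python
import random

def cleanse(records):
    """Drop duplicate and incomplete (features, label) records."""
    seen = set()
    clean = []
    for features, label in records:
        key = (tuple(features), label)
        if label is None or not features or key in seen:
            continue  # skip invalid or duplicate samples
        seen.add(key)
        clean.append((features, label))
    return clean

def train_test_split(records, test_fraction=0.2, seed=42):
    """Shuffle and segment a dataset into training and test portions."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

# Illustrative toy dataset: one duplicate and one unlabeled record.
data = [([1, 2], "a"), ([1, 2], "a"), ([3, 4], None), ([5, 6], "b"),
        ([7, 8], "a"), ([9, 0], "b"), ([2, 3], "a"), ([4, 5], "b")]
clean = cleanse(data)                  # duplicate and None-labeled rows removed
train, test = train_test_split(clean)  # 80% train, 20% test
```

Shuffling before splitting matters: if the data is ordered (say, by class), a naive tail split would leave some classes entirely out of the training set.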