ML Meets Fashion

Machine learning meets fashion

Training a model on the MNIST dataset is often considered the "hello world" of machine learning. It has been done many times over, but unfortunately, just because a model does well on MNIST does not mean it will perform well on other datasets, especially since most of the image data we have today is considerably more complex than handwritten digits.


Fashionable machine learning


Zalando decided to make MNIST fashionable again and recently released a dataset called fashion-mnist. It is in exactly the same format as the ‘regular’ MNIST data, except that the pictures show different types of clothing, shoes, and bags. It still spans 10 categories, and the images are still 28 by 28 pixels.


Let's train a model to detect which type of clothing is shown!



Linear classifier


We'll start by creating a linear classifier and see how we do. As usual, we'll use TensorFlow's Estimators framework to make our code easy to write and maintain. As a reminder, we'll load in the data, create our classifier, and then run training and evaluation. We'll also make some predictions directly from our local model.


Let's start by creating our model. We will flatten the dataset from 28x28 to 1x784 pixels and create a feature column called pixels. This step is similar to our flower_features column from the plain and simple estimators episode.
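
In code, that step might look something like the following (a minimal sketch using the TF 1.x Estimators API; the variable names are my own):

import tensorflow as tf
import numpy as np

# Each 28x28 image is flattened into a 1x784 vector, so one numeric
# feature column named 'pixels' covers all 784 pixel values.
feature_columns = [tf.feature_column.numeric_column('pixels', shape=784)]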


Next, let's create our LinearClassifier. We have 10 different possible classes to label, instead of the 3 we used with the iris flowers.
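
Continuing the sketch (the model_dir path is an assumption, chosen to match the TensorBoard logdir used later):

classifier = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    n_classes=10,  # 10 clothing categories instead of 3 iris species
    model_dir='models/fashion_mnist/linear')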


To run our training, we need to set up our dataset and input function. TensorFlow has a built-in utility for generating input functions from numpy arrays, so we'll take advantage of that.
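
A sketch of that utility in use; DATA_SETS is the dataset object loaded in the next step, and the batch size is illustrative:

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'pixels': DATA_SETS.train.images},        # flattened image arrays
    y=DATA_SETS.train.labels.astype(np.int32),   # integer class labels
    batch_size=100,
    num_epochs=None,  # cycle through the data until training stops
    shuffle=True)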


We'll load in our dataset using the input_data module. Point it at the folder where you downloaded the data.
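
Because fashion-mnist uses the same file format as MNIST, the standard reader works unchanged; the folder name below is just an example:

from tensorflow.examples.tutorials.mnist import input_data

# Reads the fashion-mnist .gz files from the given directory.
DATA_SETS = input_data.read_data_sets('data/fashion')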


We can now call train() on our classifier, passing in our input function and dataset.
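
Hypothetically (the step count here is a placeholder, not a tuned value):

# Run training; more steps generally means higher accuracy, up to a point.
classifier.train(input_fn=train_input_fn, steps=1000)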


Finally, we run an evaluation step to see how our model did. When we used the classic MNIST dataset, this model typically achieved about 91% accuracy. However, fashion-mnist is a considerably more complex dataset, and we can only reach accuracies in the low 80s, and sometimes lower.
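
Evaluation mirrors training, but makes a single ordered pass over the held-out data (again a sketch in the same style):

eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'pixels': DATA_SETS.test.images},
    y=DATA_SETS.test.labels.astype(np.int32),
    batch_size=100,
    num_epochs=1,   # one pass over the evaluation set
    shuffle=False)

results = classifier.evaluate(input_fn=eval_input_fn)
print('accuracy:', results['accuracy'])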



Going deeper


How about a deep neural network? Swapping in a DNNClassifier is a one-line change, and we can now re-run our training and evaluation to see whether a deep neural network can perform better than the linear one.
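
The swap might look like this; the hidden_units layout is my own illustrative choice, not the episode's exact configuration:

# Same feature columns and class count; only the estimator class changes,
# plus the hidden layer sizes that a deep network requires.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    n_classes=10,
    hidden_units=[128, 64],
    model_dir='models/fashion_mnist/deep')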


As we learned in an earlier episode, now is the time to bring up TensorBoard to see these two models side by side!

tensorboard --logdir=models/fashion_mnist/


(Browse to http://localhost:6006)



TensorBoard


Looking at TensorBoard, it appears that my deep model is performing no better than my linear one. This is probably an opportunity to tune some of my hyperparameters, as discussed in episode 2.


Maybe my model needs to be bigger to accommodate the complexity of this dataset? Or maybe I need to lower my learning rate? Let's try it. With a little experimentation with these parameters, we can achieve a higher accuracy than our linear model.
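
For example, a bigger network and a smaller learning rate might be tried like this (layer sizes and learning rate are illustrative guesses, not the values from the episode):

classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    n_classes=10,
    hidden_units=[256, 128, 64],  # a wider, deeper network
    optimizer=tf.train.AdamOptimizer(learning_rate=1e-4),  # lower learning rate
    model_dir='models/fashion_mnist/deep_tuned')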



It takes a few more steps of training to reach that accuracy, but in the end it is worth it for the higher accuracy numbers.


Notice that the linear model plateaus earlier than the deep network does. Because deep models are often more complex than linear ones, they can take longer to train.


At this stage, suppose we are happy with our model. We would be able to export it and produce a scalable fashion-mnist classifier API. You can watch the earlier episode for more details on how to do that.


Predicting


Let's take a quick look at how we can make predictions using Estimators. For the most part, it looks just like how we called train and evaluate; that's one of the great things about Estimators: the consistent interface.


Note that this time we set batch_size to 1, num_epochs to 1, and shuffle to False. This is because we want the predictions to be made one by one, over all the data, preserving the order. I extracted images from the middle of the evaluation dataset for us to try predicting with.
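
A sketch of that prediction call; the slice indices below are placeholders for "images from the middle of the evaluation set":

predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'pixels': DATA_SETS.test.images[5000:5005]},  # placeholder slice
    batch_size=1,
    num_epochs=1,
    shuffle=False)  # preserve the order of the examples

# Predict one example at a time, in order.
for prediction in classifier.predict(input_fn=predict_input_fn):
    print(prediction['class_ids'])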




I picked these not just because they were in the middle, but because the model gets 2 of them wrong. Both were supposed to be shirts, but the model thinks the 3rd example is a bag, and the 5th example a coat. You can see how these examples are more challenging than handwritten digits, if for no other reason than the graininess of the images.

