Posts

Artificial intelligence (AI): the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that in this limited sense…

TensorFlow Privacy: Learning with Differential Privacy for Training Data

Introducing TensorFlow Privacy: Learning with Differential Privacy for Training Data. Today, we are excited to announce TensorFlow Privacy (GitHub), an open-source library that makes it easier for developers not only to train machine-learning models with privacy, but also to advance the state of the art in machine learning with strong privacy guarantees. Modern machine learning is increasingly used to create amazing new technologies and user experiences, many of which involve training machines to learn responsibly from sensitive data, such as personal photos or email. Ideally, the parameters of trained machine-learning models should encode general patterns rather than facts about specific training examples. To ensure this, and to give strong privacy guarantees when the training data is sensitive, it is possible to use techniques based on the theory of differential privacy. In particular, when trained on user data, those techniques offer strong mathematical guarantees that the model will…
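The core mechanism behind libraries like TensorFlow Privacy is differentially private SGD: clip each example's gradient, average, and add calibrated Gaussian noise. Here is a minimal numpy sketch of that idea; the function name and constants are illustrative, not TensorFlow Privacy's actual API.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, l2_norm_clip=1.0,
                noise_multiplier=1.1, learning_rate=0.1, rng=None):
    """One differentially private SGD step: clip each example's gradient,
    average the clipped gradients, add Gaussian noise scaled to the clip
    bound (the sensitivity), then apply the update."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clip bound.
        clipped.append(g * min(1.0, l2_norm_clip / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise std is proportional to sensitivity / batch size.
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip / len(clipped),
                       size=avg.shape)
    return params - learning_rate * (avg + noise)
```

Because each example's influence on the update is bounded by the clip norm, the added noise can mask any single training example, which is what yields the formal privacy guarantee.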

Upgrade Colab with More Compute

How to Upgrade Colab with More Compute. In a previous episode of AI Adventures, we looked at Colab as a great way to get started in the world of data science and machine learning. But some models need to run longer than a Colab instance allows (instances reset after several hours), or you may want more memory or GPUs than the free tier provides. The question then becomes: how can we hook up the Colab frontend to more compute power? We'll use Google Cloud Platform's Deep Learning VMs to power the Colab environment. A while back, we looked at how to create Deep Learning VMs with your choice of machine-learning framework. Today we'll build on that example and connect Colab to the VM so we can use that machine's resources. To get started, we first need to create our VM, and make it a big one! Head over to Cloud Marketplace, find the Deep Learning VM, and select it. I'm going to call mine colab-vm. And to make it powerful, let's give it 16 vCPUs with 160 GB of memory…
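The Marketplace steps above also have a command-line equivalent. A hedged sketch using `gcloud` follows; the instance name, zone, image family, and machine type are illustrative (the family names available under `deeplearning-platform-release` change over time, and a custom machine type would be needed to match the 160 GB figure exactly):

```shell
# Hypothetical values: adjust project, zone, image family, and size to taste.
# Images under deeplearning-platform-release are the Deep Learning VM images;
# n1-highmem-16 provides 16 vCPUs.
gcloud compute instances create colab-vm \
  --zone=us-west1-b \
  --image-family=tf-latest-cpu \
  --image-project=deeplearning-platform-release \
  --machine-type=n1-highmem-16 \
  --boot-disk-size=200GB
```

Creating the VM this way gives the same result as clicking through Cloud Marketplace, but is easier to repeat or script.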

TensorFlow Hub: A Library for Reusable Machine Learning Modules in TensorFlow

Introducing TensorFlow Hub: A Library for Reusable Machine Learning Modules in TensorFlow. One of the most fundamental things in software development is the idea of a repository of shared code that is easy to draw from. As programmers, libraries instantly make us more effective. In a sense, they change the problem-solving process of programming: when using a library, we often think in terms of building blocks, or modules, that can be glued together. What might such a library look like for machine-learning developers? Of course, in addition to sharing code, we also want to share pre-trained models. Sharing a pre-trained model makes it possible for a developer to customize it for their own domain, without needing the computing resources or the data that were used to train the model originally. For example, NASNet took thousands of GPU-hours to train. By sharing the learned weights, a model developer can make it easier for others to reuse and build on their work. It's this idea of a library for machine…
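The reuse pattern described above can be sketched in miniature. This is a toy pure-Python/numpy illustration of the concept, not the actual TensorFlow Hub API: a "module" packages learned weights together with the computation that applies them, so a consumer can reuse it as a frozen feature extractor and fit only a small task-specific head on top.

```python
import numpy as np

class SharedModule:
    """Toy stand-in for a reusable pre-trained module: it bundles
    learned weights with the function that applies them."""
    def __init__(self, weights):
        self.weights = np.asarray(weights)

    def __call__(self, x):
        # Feature extraction: project inputs with the shared weights.
        return np.asarray(x) @ self.weights

# Publisher side: the expensive training happened elsewhere; only the
# resulting weights are shared (here, a fixed 2x2 matrix).
module = SharedModule([[1.0, 0.0], [0.0, 2.0]])

# Consumer side: reuse the module unchanged and add only a lightweight
# task-specific head, with no access to the original training data.
features = module([[1.0, 3.0]])      # -> [[1.0, 6.0]]
head = np.array([0.5, 0.25])         # tiny layer the consumer trains
prediction = float(features @ head)  # 0.5*1.0 + 0.25*6.0 = 2.0
```

The consumer never sees the publisher's training data or pays the training cost; they only download the weights, which is exactly the economy the NASNet example describes.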