- eBook: Generative AI with Python and TensorFlow 2: Harness the power of generative models to create images, text, and music
- Author: Joseph J Babcock, Raghav Bali
- Date: May 11, 2021
- Pages: 453
- Format: PDF, ePUB
- Explore creative and human-like capabilities of AI and generate impressive results
- Use the latest research to expand your knowledge beyond this book
- Experiment with practical TensorFlow 2.x implementations of state-of-the-art generative models
Book Description
In recent years, generative artificial intelligence has been instrumental in the creation of lifelike data (images, voice, video, music, and text) from scratch. In this book, you will unpack how these powerful models are created from relatively simple building blocks, and how you might adapt them to your own use cases.
You will begin by setting up clean containerized environments for Python and getting to grips with the fundamentals of deep neural networks, learning about core concepts like the perceptron, activation functions, backpropagation, and how they all tie together. Once you have covered the basics, you will explore deep generative models in depth, including OpenAI’s GPT-series of news generators, networks for style transfer and deepfakes, and synergy with reinforcement learning.
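The fundamentals mentioned above (the perceptron, activation functions, backpropagation) can be sketched in a few lines of plain Python. The toy example below is illustrative only, not code from the book: a single neuron with a sigmoid activation, trained by hand-derived gradient descent to learn the logical AND function. The weight names, learning rate, and epoch count are all arbitrary choices for the sketch.

```python
import math

def sigmoid(z):
    """Squash a raw weighted sum into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data for logical AND: inputs (x1, x2) -> target y
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # perceptron parameters, zero-initialized
lr = 0.5                    # learning rate (arbitrary for this sketch)

for _ in range(5000):
    for (x1, x2), y in data:
        # Forward pass: weighted sum followed by the activation
        y_hat = sigmoid(w1 * x1 + w2 * x2 + b)
        # Backward pass: gradient of squared error through the sigmoid
        grad = (y_hat - y) * y_hat * (1 - y_hat)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # -> [0, 0, 0, 1], matching the AND truth table
```

In TensorFlow 2.x, the same idea appears as a `Dense(1, activation="sigmoid")` layer, with `tf.GradientTape` computing the backward pass automatically rather than by hand.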
As you progress, you will focus on abstractions where useful, and understand the “nuts and bolts” of how the models are composed in code, underpinned by detailed architecture diagrams. The book concludes with a variety of practical projects to generate music, images, text, and speech using the methods you have learned in prior sections, piecing together TensorFlow layers, utility functions, and training loops to uncover links between the different modes of generation.
By the end of this book, you will have acquired the knowledge to create and implement your own generative AI models.
What you will learn
- Implement paired and unpaired style transfer with networks like StyleGAN
- Use facial landmarks, autoencoders, and pix2pix GAN to create deepfakes
- Build several text generation pipelines based on LSTMs, BERT, and GPT-2, learning how attention and transformers changed the NLP landscape
- Compose music using LSTM models, simple generative adversarial networks, and the intricate MuseGAN
- Train a deep learning agent to move through a simulated physical environment
- Discover emerging applications of generative AI, such as folding proteins and creating videos from images
Who This Book Is For
This book will appeal to Python programmers, seasoned modelers, and machine learning engineers who are keen to learn about creating and implementing generative models. To make the most of this book, you should have a basic familiarity with probability theory, linear algebra, and deep learning.
Table of Contents
Chapter 2: Setting Up a TensorFlow Lab
Chapter 3: Building Blocks of Deep Neural Networks
Chapter 4: Teaching Networks to Generate Digits
Chapter 5: Painting Pictures with Neural Networks Using VAEs
Chapter 6: Image Generation with GANs
Chapter 7: Style Transfer with GANs
Chapter 8: Deepfakes with GANs
Chapter 9: The Rise of Methods for Text Generation
Chapter 10: NLP 2.0: Using Transformers to Generate Text
Chapter 11: Composing Music with Generative Models
Chapter 12: Play Video Games with Generative AI: GAIL
Chapter 13: Emerging Applications in Generative AI