Natural Language Processing with TensorFlow
TensorFlow™ is an open source software library for numerical computation using data flow graphs.
SyntaxNet is a neural-network Natural Language Processing framework for TensorFlow.
Audience
This course is targeted at developers and engineers who intend to work with SyntaxNet and Word2Vec models in their TensorFlow graphs.
After completing this course, delegates will be able to:
- understand TensorFlow’s structure and deployment mechanisms
- carry out installation, configuration, and production-environment and architecture tasks
- assess code quality and perform debugging and monitoring
- implement advanced production tasks such as training models, embedding terms, building graphs, and logging
Here is what you will get with this course:
Getting Started
- Setup and Installation
TensorFlow Basics
- Creating, Initializing, Saving, and Restoring TensorFlow Variables
- Feeding, Reading and Preloading TensorFlow Data
- How to use TensorFlow infrastructure to train models at scale
- Visualizing and Evaluating models with TensorBoard
TensorFlow Mechanics
- Prepare the Data
- Download
- Inputs and Placeholders
- Build the Graph
- Inference
- Loss/Accuracy
- Training
- Train the Model
- Graph
- Session
- Train Loop (Epochs)
- Evaluate the Model
- Build the Eval Graph
- Eval Output
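The mechanics outlined above (inputs, inference, loss, a train loop over epochs, then evaluation) follow a cycle that can be sketched without any framework at all. The following minimal pure-Python example fits a line y = w·x + b by gradient descent; the data, learning rate, and epoch count are illustrative choices, not taken from the course materials:

```python
# Minimal sketch of the inference -> loss -> train-loop -> evaluate cycle,
# in plain Python rather than a TensorFlow graph (illustrative only).

data = [(x, 2.0 * x + 1.0) for x in range(10)]  # targets from y = 2x + 1

w, b = 0.0, 0.0          # model parameters ("variables")
lr = 0.01                # learning rate

def inference(x):
    return w * x + b     # the model's prediction

def loss(batch):
    return sum((inference(x) - y) ** 2 for x, y in batch) / len(batch)

# Train loop (epochs): one gradient-descent step per epoch over all data.
for epoch in range(2000):
    gw = sum(2 * (inference(x) - y) * x for x, y in data) / len(data)
    gb = sum(2 * (inference(x) - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

# Evaluate the model: parameters should approach w=2, b=1.
print(round(w, 2), round(b, 2), round(loss(data), 6))
```

In TensorFlow these same steps become graph construction (placeholders, variables, a loss op, a training op) plus a session-driven train loop, but the data flow is identical.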
Advanced Usage
- Threading and Queues
- Distributed TensorFlow
- Writing Documentation and Sharing your Model
- Customizing Data Readers
- Using GPUs
- Manipulating TensorFlow Model Files
TensorFlow Serving
- Introduction
- Basic Serving Tutorial
- Advanced Serving Tutorial
- Serving Inception Model Tutorial
Getting Started with SyntaxNet
- Parsing from Standard Input
- Annotating a Corpus
- Configuring the Python Scripts
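As a concrete illustration of parsing from standard input, the SyntaxNet README shows piping a sentence into the bundled demo script, which runs the pretrained Parsey McParseface model. This assumes a built SyntaxNet checkout; the script path may differ between releases:

```shell
# Parse a sentence from standard input (run from the root of a
# built SyntaxNet checkout).
echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh
```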
Building an NLP Pipeline with SyntaxNet
- Obtaining Data
- Part-of-Speech Tagging
- Training the SyntaxNet POS Tagger
- Preprocessing with the Tagger
- Dependency Parsing: Transition-Based Parsing
- Training a Parser Step 1: Local Pretraining
- Training a Parser Step 2: Global Training
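Transition-based parsing, as used in the pipeline above, builds a dependency tree by applying a sequence of actions to a stack and a buffer. The sketch below shows the arc-standard mechanics with a hand-picked action sequence; the sentence and actions are made up for illustration, and SyntaxNet's contribution is learning which action to take with a neural network:

```python
# Illustrative arc-standard transition system: a stack, a buffer,
# and three actions (SHIFT, LEFT-ARC, RIGHT-ARC).

def parse(tokens, actions):
    stack, buffer, arcs = [], list(tokens), []
    for action in actions:
        if action == "SHIFT":               # move next word onto the stack
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC":          # second-top depends on top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))   # (head, dependent)
        elif action == "RIGHT-ARC":         # top depends on second-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# "cats chase mice": 'chase' heads both 'cats' (left) and 'mice' (right).
arcs = parse(["cats", "chase", "mice"],
             ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC"])
print(arcs)  # [('chase', 'cats'), ('chase', 'mice')]
```

"Local pretraining" and "global training" in the course refer to how the action-scoring network is trained: first on individual gold transitions, then with beam search over whole action sequences.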
Vector Representations of Words
- Motivation: Why Learn Word Embeddings?
- Scaling up with Noise-Contrastive Training
- The Skip-gram Model
- Building the Graph
- Training the Model
- Visualizing the Learned Embeddings
- Evaluating Embeddings: Analogical Reasoning
- Optimizing the Implementation
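The skip-gram model covered in this module trains on (input word, context word) pairs: for each word, the model predicts the words within a fixed-size window around it. A minimal sketch of generating those pairs, with a made-up sentence and window size:

```python
# Illustrative skip-gram training-pair generation: each word predicts
# its neighbors within a +/- `window` range.

def skipgram_pairs(tokens, window=1):
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))  # (input, context)
    return pairs

pairs = skipgram_pairs(["the", "quick", "brown", "fox"], window=1)
print(pairs)
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#  ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```

In the full model, each pair becomes one training example for a classifier over embeddings; noise-contrastive training makes this scale by scoring a handful of sampled "noise" words instead of the whole vocabulary.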
Requirements for live participation:
- Standard desktop or laptop computer (64-bit), headset, webcam
- Stable internet connection
- NVIDIA GPUs
- At least 2 GB of RAM
Prerequisites:
- Working knowledge of Python
Duration: 5 days
Price per participant: 4,250 EUR
Language/Documentation: English/German
Contact
If you are interested in a company-specific custom development and would like to find out more, please feel free to get in touch with us.
Give us a call on: +49 (0) 176 310 693 62
or send an email to: info@inovaitec.com
Alternatively, you can fill out our contact form here. We look forward to hearing from you.