Fundamentals of Deep Learning: Copyright Information
- ISBN: 9787564175177
- Barcode: 9787564175177; 978-7-5641-7517-7
- Binding: standard offset paper
- Number of volumes: N/A
- Weight: N/A
- Category:
Fundamentals of Deep Learning: About the Book
Companies such as Google, Microsoft, and Facebook are actively building out in-house deep learning teams. For the rest of us, though, deep learning remains a complex and difficult subject to master. If you are familiar with Python, have a background in calculus, and have a basic understanding of machine learning, this book will help you get started on your deep learning journey.
- Examine the fundamentals of machine learning and neural networks
- Learn how to train feed-forward neural networks
- Implement your first neural network with TensorFlow
- Manage the problems that arise as your networks grow deeper
- Build neural networks that analyze complex images
- Perform effective dimensionality reduction with autoencoders
- Dive into topics ranging from sequence analysis to language understanding
- Master the fundamentals of reinforcement learning
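As a quick orientation to the material covered in the first two chapters of the table of contents below (feed-forward networks, sigmoid neurons, gradient descent, and backpropagation), here is a minimal NumPy sketch. It is not taken from the book; the layer sizes, learning rate, and iteration count are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, the classic problem a single linear neuron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 sigmoid neurons feeding a single sigmoid output neuron.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network predictions

    # Backpropagation of the mean-squared-error gradient.
    d_out = (out - y) * out * (1 - out)   # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # delta at the hidden layer

    # Plain full-batch gradient descent update.
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(np.round(out.ravel(), 3))  # predictions should approach [0, 1, 1, 0]
```

Expressing the same kind of model with TensorFlow variables, placeholders, and sessions is what Chapter 3 of the table of contents walks through.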
Fundamentals of Deep Learning: Table of Contents
1. The Neural Network
Building Intelligent Machines
The Limits of Traditional Computer Programs
The Mechanics of Machine Learning
The Neuron
Expressing Linear Perceptrons as Neurons
Feed-Forward Neural Networks
Linear Neurons and Their Limitations
Sigmoid, Tanh, and ReLU Neurons
Softmax Output Layers
Looking Forward
2. Training Feed-Forward Neural Networks
The Fast-Food Problem
Gradient Descent
The Delta Rule and Learning Rates
Gradient Descent with Sigmoidal Neurons
The Backpropagation Algorithm
Stochastic and Minibatch Gradient Descent
Test Sets, Validation Sets, and Overfitting
Preventing Overfitting in Deep Neural Networks
Summary
3. Implementing Neural Networks in TensorFlow
What Is TensorFlow?
How Does TensorFlow Compare to Alternatives?
Installing TensorFlow
Creating and Manipulating TensorFlow Variables
TensorFlow Operations
Placeholder Tensors
Sessions in TensorFlow
Navigating Variable Scopes and Sharing Variables
Managing Models over the CPU and GPU
Specifying the Logistic Regression Model in TensorFlow
Logging and Training the Logistic Regression Model
Leveraging TensorBoard to Visualize Computation Graphs and Learning
Building a Multilayer Model for MNIST in TensorFlow
Summary
4. Beyond Gradient Descent
The Challenges with Gradient Descent
Local Minima in the Error Surfaces of Deep Networks
Model Identifiability
How Pesky Are Spurious Local Minima in Deep Networks?
Flat Regions in the Error Surface
When the Gradient Points in the Wrong Direction
Momentum-Based Optimization
A Brief View of Second-Order Methods
Learning Rate Adaptation
AdaGrad: Accumulating Historical Gradients
RMSProp: Exponentially Weighted Moving Average of Gradients
Adam: Combining Momentum and RMSProp
The Philosophy Behind Optimizer Selection
Summary
5. Convolutional Neural Networks
Neurons in Human Vision
The Shortcomings of Feature Selection
Vanilla Deep Neural Networks Don't Scale
Filters and Feature Maps
Full Description of the Convolutional Layer
Max Pooling
Full Architectural Description of Convolutional Networks
Closing the Loop on MNIST with Convolutional Networks
Image Preprocessing Pipelines Enable More Robust Models
Accelerating Training with Batch Normalization
Building a Convolutional Network for CIFAR-10
Visualizing Learning in Convolutional Networks
Leveraging Convolutional Filters to Replicate Artistic Styles
Learning Convolutional Filters for Other Problem Domains
Summary
6. Embedding and Representation Learning
Learning Lower-Dimensional Representations
Principal Component Analysis
Motivating the Autoencoder Architecture
Implementing an Autoencoder in TensorFlow
Denoising to Force Robust Representations
Sparsity in Autoencoders
When Context Is More Informative than the Input Vector
The Word2Vec Framework
Implementing the Skip-Gram Architecture
Summary
7. Models for Sequence Analysis
Analyzing Variable-Length Inputs
Tackling seq2seq with Neural N-Grams
Implementing a Part-of-Speech Tagger
Dependency Parsing and SyntaxNet
Beam Search and Global Normalization
A Case for Stateful Deep Learning Models
Recurrent Neural Networks
The Challenges with Vanishing Gradients
Long Short-Term Memory (LSTM) Units
TensorFlow Primitives for RNN Models
Implementing a Sentiment Analysis Model
Solving seq2seq Tasks with Recurrent Neural Networks
Augmenting Recurrent Networks with Attention
Dissecting a Neural Translation Network
Summary
8. Memory Augmented Neural Networks
Neural Turing Machines
Attention-Based Memory Access
NTM Memory Addressing Mechanisms
Differentiable Neural Computers
Interference-Free Writing in DNCs
DNC Memory Reuse
Temporal Linking of DNC Writes
Understanding the DNC Read Head
The DNC Controller Network
Visualizing the DNC in Action
Implementing the DNC in TensorFlow
Teaching a DNC to Read and Comprehend
Summary
9. Deep Reinforcement Learning
Deep Reinforcement Learning Masters Atari Games
What Is Reinforcement Learning?
Markov Decision Processes (MDP)
Policy
Future Return
Discounted Future Return
Explore Versus Exploit
Policy Versus Value Learning
Policy Learning via Policy Gradients
Pole-Cart with Policy Gradients
OpenAI Gym
Creating an Agent
Building the Model and Optimizer
Sampling Actions
Keeping Track of History
Policy Gradient Main Function
PGAgent Performance on Pole-Cart
Q-Learning and Deep Q-Networks
The Bellman Equation
Issues with Value Iteration
Approximating the Q-Function
Deep Q-Network (DQN)
Training DQN
Learning Stability
Target Q-Network
Experience Replay
From Q-Function to Policy
DQN and the Markov Assumption
DQN's Solution to the Markov Assumption
Playing Breakout with DQN
Building Our Architecture
Stacking Frames
Setting Up Training Operations
Updating Our Target Q-Network
Implementing Experience Replay
DQN Main Loop
DQNAgent Results on Breakout
Improving and Moving Beyond DQN
Deep Recurrent Q-Networks (DRQN)
Asynchronous Advantage Actor-Critic Agent (A3C)
UNsupervised REinforcement and Auxiliary Learning (UNREAL)
Summary
Index
Fundamentals of Deep Learning: About the Authors
Nikhil Buduma is the cofounder and chief scientist of Remedy, a San Francisco-based company building a new, data-driven system for healthcare management. At 16, he ran a drug discovery laboratory at San Jose State University, developing novel, low-cost screening methods for resource-constrained communities. By 19, he was a two-time gold medalist at the International Biology Olympiad. He then went on to MIT, where he focused on developing large-scale data systems to improve healthcare delivery, mental health, and medical research. While at MIT, he cofounded Lean On Me, a national nonprofit that runs an anonymous text hotline to provide effective one-on-one peer support on college campuses and uses data to positively impact mental health and wellness. Today, Nikhil invests in hard-technology and data companies through his venture fund, Q Venture Partners, and manages a data analytics team for the Milwaukee Brewers baseball club.

Contributing author Nick Locascio is a deep learning consultant, writer, and researcher. Nick earned his bachelor's degree and master of engineering degree in Regina Barzilay's lab at MIT, specializing in NLP and computer vision research. He has worked on a wide range of projects, from training neural networks to writing natural-language prompts, and has collaborated with the MGH Department of Radiology to apply deep learning to aid diagnosis in mammography. Nick's work has been featured by MIT News and CNBC. In his spare time, he provides private deep learning consulting to Fortune 500 companies. He also cofounded the landmark MIT course 6.S191 Intro to Deep Learning, teaching more than 300 students ranging from postdocs to professors.