Neural Networks and Learning Machines (English Edition, 3rd Edition)
Neural Networks and Learning Machines (English Edition, 3rd Edition): Copyright Information
- ISBN: 9787111265283
- Barcode: 9787111265283; 978-7-111-26528-3
- Binding: not available
- Number of volumes: not available
- Weight: not available
- Category: not listed
Neural Networks and Learning Machines (English Edition, 3rd Edition): About the Book
Neural networks are a major branch of computational intelligence and machine learning, and have seen great success in many fields. Among the many books on neural networks, the most influential is Simon Haykin's Neural Networks: A Comprehensive Foundation, retitled Neural Networks and Learning Machines as of this third edition. Drawing on recent advances in neural networks and machine learning, the author gives a comprehensive, systematic introduction to the basic models, methods, and techniques of neural networks, from theory through practical application, and integrates neural networks and machine learning into a unified treatment.
The book emphasizes mathematical analysis and theory while also paying close attention to applications of neural networks to practical engineering problems such as pattern recognition, signal processing, and control systems. It is highly readable: the author analyzes the basic models and principal learning theories of neural networks in depth yet with a light touch, and a wealth of computer experiments, worked examples, and exercises help the reader master the material.
This edition has been extensively revised and offers an up-to-date treatment of neural networks and machine learning, two disciplines of growing importance.
Features of This Book
Online learning algorithms based on stochastic gradient descent; small-scale and large-scale learning problems.
Kernel methods, including support vector machines and the representer theorem.
Information-theoretic learning models, including copulas, independent-components analysis (ICA), coherent ICA, and the information bottleneck.
Stochastic dynamic programming, including approximate dynamic programming and neurodynamic programming.
Sequential state-estimation algorithms, including Kalman and particle filters.
Recurrent neural networks trained with sequential state-estimation algorithms.
Insightful computer-oriented experiments.
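The first feature listed, online learning via stochastic gradient descent, is the idea behind the least-mean-square (LMS) algorithm treated in Chapter 3. A minimal NumPy sketch of that online update (illustrative code of ours, not taken from the book; variable names are our own):

```python
import numpy as np

def lms_train(X, d, eta=0.05, epochs=5):
    """Online LMS: for each sample (x, d), w <- w + eta * e * x,
    where e = d - w.x is the instantaneous error."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x        # instantaneous (per-sample) error
            w = w + eta * e * x       # stochastic-gradient step
    return w

# Toy usage: recover a known linear filter from noisy observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
w_true = np.array([0.5, -1.0, 2.0])
d = X @ w_true + 0.01 * rng.standard_normal(500)
w_hat = lms_train(X, d)
```

Because each update uses a single sample rather than the full data set, the same loop scales from small-scale to large-scale (streaming) learning problems, at the cost of gradient noise controlled by the step size `eta`.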
Neural Networks and Learning Machines (English Edition, 3rd Edition): Table of Contents
acknowledgements
abbreviations and symbols
glossary
introduction
1 what is a neural network?
2 the human brain
3 models of a neuron
4 neural networks viewed as directed graphs
5 feedback
6 network architectures
7 knowledge representation
8 learning processes
9 learning tasks
10 concluding remarks
notes and references
chapter 1 rosenblatt's perceptron
1.1 introduction
1.2 perceptron
1.3 the perceptron convergence theorem
1.4 relation between the perceptron and bayes classifier for a gaussian environment
1.5 computer experiment:pattern classification
1.6 the batch perceptron algorithm
1.7 summary and discussion
notes and references
problems
chapter 2 model building through regression
2.1 introduction
2.2 linear regression model:preliminary considerations
2.3 maximum a posteriori estimation of the parameter vector
2.4 relationship between regularized least-squares estimation and map estimation
2.5 computer experiment:pattern classification
2.6 the minimum-description-length principle
2.7 finite sample-size considerations
2.8 the instrumental-variables method
2.9 summary and discussion
notes and references
problems
chapter 3 the least-mean-square algorithm
3.1 introduction
3.2 filtering structure of the lms algorithm
3.3 unconstrained optimization:a review
3.4 the wiener filter
3.5 the least-mean-square algorithm
3.6 markov model portraying the deviation of the lms algorithm from the wiener filter
3.7 the langevin equation:characterization of brownian motion
3.8 kushner's direct-averaging method
3.9 statistical lms learning theory for small learning-rate parameter
3.10 computer experiment i:linear prediction
3.11 computer experiment ii:pattern classification
3.12 virtues and limitations of the lms algorithm
3.13 learning-rate annealing schedules
3.14 summary and discussion
notes and references
problems
chapter 4 multilayer perceptrons
4.1 introduction
4.2 some preliminaries
4.3 batch learning and on-line learning
4.4 the back-propagation algorithm
4.5 xor problem
4.6 heuristics for making the back-propagation algorithm perform better
4.7 computer experiment:pattern classification
4.8 back-propagation and differentiation
4.9 the hessian and its role in on-line learning
4.10 optimal annealing and adaptive control of the learning rate
4.11 generalization
4.12 approximations of functions
4.13 cross-validation
4.14 complexity regularization and network pruning
4.15 virtues and limitations of back-propagation learning
4.16 supervised learning viewed as an optimization problem
4.17 convolutional networks
4.18 nonlinear filtering
4.19 small-scale versus large-scale learning problems
4.20 summary and discussion
notes and references
problems
chapter 5 kernel methods and radial-basis function networks
5.1 introduction
5.2 cover’s theorem on the separability of patterns
5.3 the interpolation problem
5.4 radial-basis-function networks
5.5 k-means clustering
5.6 recursive least-squares estimation of the weight vector
5.7 hybrid learning procedure for rbf networks
5.8 computer experiment:pattern classification
5.9 interpretations of the gaussian hidden units
5.10 kernel regression and its relation to rbf networks
5.11 summary and discussion
notes and references
problems
chapter 6 support vector machines
chapter 7 regularization theory
chapter 8 principal-components analysis
chapter 9 self-organizing maps
chapter 10 information-theoretic learning models
chapter 11 stochastic methods rooted in statistical mechanics
chapter 12 dynamic programming
chapter 13 neurodynamics
Neural Networks and Learning Machines (English Edition, 3rd Edition): About the Author
Simon Haykin received his Ph.D. from the University of Birmingham, UK, in 1953, and is currently a professor in the Department of Electrical and Computer Engineering at McMaster University, Canada, and director of its Communications Research Laboratory. A renowned scholar in electrical and electronics engineering, he has received the IEEE McNaughton Gold Medal, and is a Fellow of the Royal Society of Canada and a Fellow of the IEEE. He has made prolific contributions to neural networks, communications, and adaptive filtering, and has authored several standard textbooks.