The job market is poor because the economy is poor: both the macro economy and the industry environment are weak. So I think what you should do is improve your own skills. Job hunting is essentially a process of maximizing expected value, and the one factor you control is your own probability of success, which means becoming stronger professionally, rather than picking whichever industry happens to be hiring the most; the latter mindset will drag you down over time. The weakness of a non-CS background is the fundamentals, so what you should do is shore up your fundamentals. Deep learning (as parameter tuning) has little future ahead of it. That is my own view, and I am no authority, but studying how these models actually work does have a future; the trouble is that today's applied work is mostly parameter tuning, and no one will pay you to study the mathematics behind the models (companies like Google excepted; I am speaking of Chinese companies). Artificial intelligence will be a very strong direction in the future, but not necessarily deep learning. Computer vision and mechanical engineering are both strongly application-driven fields. I don't know mechanical engineering, but computer vision should be a good direction, even though the current domestic level is mediocre. A defining feature of computer vision is that it replaces people in repetitive work, and the one thing China does not lack right now is people, so I don't know how hiring in computer vision has gone in recent years; if demand is high, it is an excellent field. My understanding is that mechanical engineering is also promising, just not as hot right now as AI-related work.

I hope this helps. Note that everything above is information, not strong advice or direction: no one person's thinking can direct another's actions. I am only providing information, and I hope you will think more and read more. Tony, Nov 24, 2018
The original reply I received:
Hello Professor Tan, and thank you for taking the time to point me in the right direction. As you said, I should first form a clear understanding of myself and then choose my direction. In today's society, wealth is taken as the measure of ability. At university entrance everyone starts out about the same, but different majors pull apart once people enter the workforce. That is exactly why biology, chemistry, environmental science, and materials are called the "talk-you-out-of-it" majors, and mechanical engineering the underdog major. Personally I don't think students in these majors are any worse than those in CS or EE, but whoever cannot make money ends up downstream in this society. As a mechanical engineering student I suppose I'm unwilling to accept that, though of course it was my own choice at the time. Students in those majors love money like everyone else and hope they and their families can live a little better someday, so I think I will switch to the Internet/IT industry. As you said, most people don't know what they love, and neither do I. Professionally, robotics, application development, and AI all appeal to me, but I can't call any of them a passion, so probably none is a true love. As a way to make a living I will switch to the Internet industry; as a personal pursuit, I should get to know myself better first. Thank you for your guidance. I used to assume that choosing a direction would become the whole of my life, which is why I was afraid to decide. I think making this question public is a good thing; I hope everyone with doubts can be a little braver.
The definitive guide to floating point arithmetic is the IEEE 754-2008 Standard; however, it is not available for free online.
For a brief but lucid presentation of how floating-point numbers are represented, see John D. Cook’s article on the subject as well as his introduction to some of the issues arising from how this representation differs in behavior from the idealized abstraction of real numbers.
For even more extensive documentation of the history of, rationale for, and issues with floating-point numbers, as well as discussion of many other topics in numerical computing, see the collected writings of William Kahan, commonly known as the “Father of Floating-Point”. Of particular interest may be An Interview with the Old Man of Floating-Point.
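The representational issues mentioned above are easy to see firsthand. As a quick illustration (a Python sketch of my own, since the binary64 behavior is identical across languages): `0.1` and `0.2` have no exact binary representation, so their sum carries a small rounding error and exact comparison fails.

```python
import math

# 0.1 and 0.2 have no exact binary64 representation, so the sum
# carries a small rounding error and exact equality fails.
a = 0.1 + 0.2
print(a == 0.3)        # False
print(f"{a:.17f}")     # 0.30000000000000004

# Compare within a tolerance instead of exactly.
print(math.isclose(a, 0.3))  # True
```

The usual remedy is never to compare floats for exact equality; compare within a tolerance appropriate to the computation instead.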
```julia
julia> x = typemin(Int64)
-9223372036854775808

julia> x = x - 1
9223372036854775807

julia> typeof(x)
Int64

julia> y = BigInt(typemin(Int64))
-9223372036854775808

julia> y = y - 1
-9223372036854775809

julia> typeof(y)
BigInt
```
```julia
# Assign the value 10 to the variable x
julia> x = 10
10

# Doing math with x's value
julia> x + 1
11

# Reassign x's value
julia> x = 1 + 1
2

# You can assign values of other types, like strings of text
julia> x = "Hello World!"
"Hello World!"
```
```julia
julia> x = 1.0
1.0

julia> y = -3
-3

julia> Z = "My string"
"My string"

julia> customary_phrase = "Hello world!"
"Hello world!"

julia> UniversalDeclarationOfHumanRightsStart = "人人生而自由,在尊严和权利上一律平等。"
"人人生而自由,在尊严和权利上一律平等。"
```
1 Introduction to Information Theory
2 Probability, Entropy, and Inference
3 More about Inference
I Data Compression
4 The Source Coding Theorem
5 Symbol Codes
6 Stream Codes
7 Codes for Integers
II Noisy-Channel Coding
8 Dependent Random Variables
9 Communication over a Noisy Channel
10 The Noisy-Channel Coding Theorem
11 Error-Correcting Codes and Real Channels
III Further Topics in Information Theory
12 Hash Codes: Codes for Efficient Information Retrieval
13 Binary Codes
14 Very Good Linear Codes Exist
15 Further Exercises on Information Theory
16 Message Passing
17 Communication over Constrained Noiseless Channels
18 Crosswords and Codebreaking
19 Why have Sex? Information Acquisition and Evolution
IV Probabilities and Inference
20 An Example Inference Task: Clustering
21 Exact Inference by Complete Enumeration
22 Maximum Likelihood and Clustering
23 Useful Probability Distributions
24 Exact Marginalization
25 Exact Marginalization in Trellises
26 Exact Marginalization in Graphs
27 Laplace's Method
28 Model Comparison and Occam's Razor
29 Monte Carlo Methods
30 Efficient Monte Carlo Methods
31 Ising Models
32 Exact Monte Carlo Sampling
33 Variational Methods
34 Independent Component Analysis and Latent Variable Modelling
35 Random Inference Topics
36 Decision Theory
37 Bayesian Inference and Sampling Theory
V Neural Networks
38 Introduction to Neural Networks
39 The Single Neuron as a Classifier
40 Capacity of a Single Neuron
41 Learning as Inference
42 Hopfield Networks
43 Boltzmann Machines
44 Supervised Learning in Multilayer Networks
45 Gaussian Processes
46 Deconvolution
VI Sparse Graph Codes
47 Low-Density Parity-Check Codes
48 Convolutional Codes and Turbo Codes
49 Repeat-Accumulate Codes
50 Digital Fountain Codes
Before studying this book, "Reinforcement Learning", I believed that evolutionary methods had to be a part of artificial intelligence, but the book threw cold water on that: the authors argue that evolutionary algorithms contribute little to reinforcement learning, or rather that their drawbacks outweigh their benefits, and that they are not well suited as reinforcement learning methods. I still believe, however, that if AI is ever achieved, it will have to simulate the process by which human or animal intelligence formed. Even if evolution is not the main mechanism by which an agent learns skills in an individual lifetime, it must have a very important influence on the long-term formation of intelligence. We should not reject evolutionary methods entirely just because they are unsuitable for certain reinforcement learning tasks; on the contrary, we should look for ways to combine the two. The book's treatment of reinforcement learning is organized mainly around estimating value functions, but estimating a value function is not strictly required for reinforcement learning. I introduced value functions in the previous post: https://face2ai.com/RL-RSAB-1-3-Elements-of-RL/.
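To make "estimating a value function" concrete, here is a toy sketch of my own (not an example from the book): an every-visit Monte Carlo estimate of the state values of a symmetric five-state random walk, whose true values are V(s) = s/4 for s = 1, 2, 3.

```python
import random

# Toy 1-D random walk: states 0..4, episodes start in state 2 and
# step left or right with equal probability.  Reaching state 4 pays
# return 1, reaching state 0 pays 0.  Every-visit Monte Carlo
# averages the observed returns to estimate V(s).
def run_episode():
    path, s = [], 2
    while 0 < s < 4:
        path.append(s)
        s += random.choice((-1, 1))
    return path, (1.0 if s == 4 else 0.0)

def mc_value_estimate(num_episodes=20000, seed=0):
    random.seed(seed)
    totals = {s: 0.0 for s in (1, 2, 3)}
    counts = {s: 0 for s in (1, 2, 3)}
    for _ in range(num_episodes):
        path, ret = run_episode()
        for s in path:          # every-visit update
            totals[s] += ret
            counts[s] += 1
    return {s: totals[s] / counts[s] for s in totals}

V = mc_value_estimate()
print(V)  # approaches {1: 0.25, 2: 0.5, 3: 0.75}
```

This is only the value-estimation view the book emphasizes; policy-search and evolutionary approaches would optimize behavior directly without ever building such a table.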
It is not clear how far back the pendulum will swing, but reinforcement learning research is certainly part of the swing back toward simpler and fewer general principles of artificial intelligence.
I will not translate this passage. For technical and scientific content, a translation that is faithful (信) cannot also be fluent and elegant (达, 雅), and a fluent, elegant translation cannot stay faithful.
References
Sutton, R. S., & Barto, A. G. Reinforcement Learning: An Introduction. 2011.