The incorrect statement about machine learning is ( )
A. Deep learning is a learning method based on unsupervised feature learning and feature hierarchy.
B. In machine learning, getting good features is the key to successful recognition.
C. The training process of deep learning is mainly divided into two steps: the first step is top-down supervised learning; the second step is top-down unsupervised learning.
D. Deep learning adopts a layer-by-layer training mechanism.
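The actual two-phase procedure (which option C reverses) is bottom-up, unsupervised, layer-by-layer pretraining, followed by top-down supervised fine-tuning. A minimal NumPy sketch of the first phase using stacked autoencoders; the layer widths, learning rate, and toy data are made-up illustration values, not from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, hidden, epochs=300, lr=0.1):
    """Train one autoencoder layer on X (unsupervised);
    return the encoder weights and the hidden codes for the next layer."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden))   # encoder weights
    W2 = rng.normal(0, 0.1, (hidden, d))   # decoder weights
    for _ in range(epochs):
        H = sigmoid(X @ W1)                # encode
        R = H @ W2                         # linear decode
        err = R - X                        # reconstruction error
        gW2 = (H.T @ err) / n
        dH = (err @ W2.T) * H * (1 - H)    # backprop through sigmoid
        gW1 = (X.T @ dH) / n
        W2 -= lr * gW2
        W1 -= lr * gW1
    return W1, sigmoid(X @ W1)

# Step 1: bottom-up unsupervised pretraining, layer by layer --
# each new layer is trained on the codes produced by the layer below.
X = rng.random((50, 8))
codes, stack = X, []
for width in (6, 4):
    W, codes = train_autoencoder(codes, width)
    stack.append(W)

# Step 2 would be top-down supervised fine-tuning: adjusting the whole
# stack with backpropagation against labels (omitted here).
```

Each layer in `stack` was trained only on the output of the layer below it, which is the "layer-by-layer training mechanism" option D refers to.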
() is not a disadvantage of the A* algorithm
A. The A* algorithm takes up too much memory
B. The A* algorithm's search efficiency is low
C. It is difficult for the A* algorithm to solve the shortest-path problem in state-space search
D. The A* algorithm cannot guarantee an optimal solution
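A minimal Python sketch of A* on a 4-connected grid, which makes the options concrete: the open heap and cost/parent maps are the memory overhead in option A, and with an admissible heuristic (Manhattan distance here) the returned path is optimal. The grid example is made up for illustration:

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a grid of 0 (free) / 1 (blocked) cells.
    Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance: admissible for 4-connected unit-cost moves
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]     # (f = g + h, g, node)
    g = {start: 0}
    parent = {}
    while open_heap:
        f, cost, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]                  # walk parents back to start
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1]
        if cost > g.get(node, float("inf")):
            continue                       # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = cost + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    parent[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))  # detours around the blocked row: 7 cells
```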
The wrong statement about the key technologies of robot vision is ( )
A. Region-based segmentation algorithms have a certain robustness to noise, and region features are relatively simple to select
B. Methods based on local feature invariants: they have excellent properties for describing local regions of an image and for handling external noise
C. Binocular stereo vision detects obstacles mainly by means of stereo-vision methods
D. Microwave radar ranging is relatively stable in performance but costly; its spatial coverage is limited, and mutual electromagnetic interference may occur between radars
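Option A's region-based segmentation can be illustrated with a region-growing sketch; this is one generic region-based technique, not necessarily the exact algorithm the question has in mind, and the tolerance value and toy image are made up. Comparing each pixel against the running region mean is what gives the noise robustness the option mentions:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Region-based segmentation by region growing: starting from a seed
    pixel, absorb 4-connected neighbors whose intensity lies within
    `tol` of the current region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1     # running sum/count -> region mean
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(img[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    q.append((nr, nc))
    return mask

# Toy image: a bright 3x3 square on a dark background
img = np.full((6, 6), 10.0)
img[2:5, 2:5] = 100.0
mask = region_grow(img, seed=(3, 3), tol=20.0)  # segments only the square
```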
The application of computer vision in the measurement field includes angle measurement and length measurement.
The similarities between neural network and deep learning are ()
A. The system consists of an input layer, hidden layers (possibly multiple), and an output layer; only nodes in adjacent layers are connected, and each layer can be regarded as a logistic regression model.
B. Both use the BP algorithm to adjust the parameters, that is, an iterative algorithm is used to train the whole network.
C. During training, the initial values are set randomly, the current output of the network is computed, and the parameters of the earlier layers are changed according to the difference between the current output and the sample's true label, until convergence.
D. For a deep network (more than 7 layers), gradient diffusion (vanishing gradients) will occur.
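The training loop described in options B and C (random initialization, compute the current output, propagate the output error back to earlier layers, iterate until convergence) can be sketched with a tiny NumPy network; XOR, the layer width, and the learning rate are illustrative choices, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR. The network is input -> hidden -> output; each layer is
# effectively a logistic regression on the layer below (option A).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial values (option C)
W1 = rng.normal(0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1.0, (4, 1)); b2 = np.zeros(1)

lr, losses = 0.5, []
for _ in range(5000):
    # forward pass: compute the current network output
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass (BP): push the output error back to earlier layers
    d_out = (out - y) * out * (1 - out)
    d_H = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_H;    b1 -= lr * d_H.sum(axis=0)
```

The loss curve in `losses` shrinks as the iterations repeat, which is the "train until convergence" loop of option C.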