" Artificial Intelligence"related to papers

Abstract: An artificial neural network (ANN) is used to extract the scattering parameters and noise parameters of GaAs high electron mobility transistors with different frequency bands and gate widths. Two neural networks are trained on the scattering parameters and the noise parameters respectively. The average relative error and mean square error obtained with different numbers of hidden layers and neurons are compared, and 8-8-6 and 6-4 are found to be the optimal hidden-layer configurations for the scattering-parameter network and the noise-parameter network respectively. The test results show that the average relative error of the scattering parameters is 2.79%, an improvement of 31.3% over a conventional single neural network structure. This indicates that the proposed model has better accuracy and reliability and is well suited to parameter extraction for wideband, strongly nonlinear RF transistors.
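
As a hedged illustration of the reported 8-8-6 topology, a minimal Keras sketch of such a regression network is given below; the input/output dimensions, activation functions and training data are assumptions for illustration only, not values taken from the paper.

    import numpy as np
    from tensorflow import keras

    # Placeholder data: hypothetical bias/frequency/geometry inputs and S-parameter targets
    x_train = np.random.rand(1000, 4).astype("float32")
    y_train = np.random.rand(1000, 8).astype("float32")

    model = keras.Sequential([
        keras.layers.Input(shape=(4,)),
        keras.layers.Dense(8, activation="tanh"),   # 8-8-6 hidden structure reported in the abstract
        keras.layers.Dense(8, activation="tanh"),
        keras.layers.Dense(6, activation="tanh"),
        keras.layers.Dense(8)                       # linear output layer for regression
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(x_train, y_train, epochs=50, batch_size=32, verbose=0)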

Abstract: A new deep learning model based on Seq2Seq and Bi-LSTM is proposed for automatic proofreading of Chinese text. Unlike traditional rule-based and probabilistic-statistical methods, the model is implemented by adding Bi-LSTM units and an attention mechanism on top of the basic Seq2Seq architecture. Comparative experiments with different models were carried out on open data sets. The experimental results show that the new model can effectively handle long-distance text errors and semantic errors, and that adding the Bi-LSTM units and the attention mechanism improves the performance of the Chinese text proofreading model.
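
A minimal sketch of this kind of encoder-decoder is shown below, assuming a Bi-LSTM encoder over the erroneous sentence and an attention-equipped LSTM decoder over the corrected sentence; the vocabulary size, layer widths and dot-product attention layer are illustrative assumptions, not the paper's exact configuration.

    from tensorflow import keras
    from tensorflow.keras import layers

    VOCAB, EMB = 5000, 128                      # assumed vocabulary size and embedding width

    # Encoder: bidirectional LSTM over the (possibly erroneous) source sentence
    src = keras.Input(shape=(None,), dtype="int32")
    enc_out = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(
        layers.Embedding(VOCAB, EMB)(src))

    # Decoder: LSTM over the corrected target prefix, attending to the encoder states
    tgt = keras.Input(shape=(None,), dtype="int32")
    dec_out = layers.LSTM(256, return_sequences=True)(layers.Embedding(VOCAB, EMB)(tgt))
    ctx = layers.Attention()([dec_out, enc_out])          # dot-product attention
    logits = layers.Dense(VOCAB)(layers.Concatenate()([dec_out, ctx]))

    model = keras.Model([src, tgt], logits)
    model.compile(optimizer="adam",
                  loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))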

Abstract: Traditional feature extraction methods provide limited discriminative features, while deep learning methods need large amounts of labeled data and are time-consuming. This paper presents a method that fuses deep and shallow features for face recognition. Firstly, the HOG feature is extracted from each image and its dimensionality is reduced; the PCANet feature is extracted simultaneously and its dimensionality is also reduced. Secondly, the two types of features are fused and discriminative features are further extracted. Finally, an SVM is adopted for classification. Experiments on the AR database verify the effectiveness and robustness of the proposed method.
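
The fusion-then-SVM pipeline can be sketched as follows; as a hedged simplification, an ordinary PCA on raw pixels stands in for the PCANet branch, and random arrays stand in for the AR database, so only the overall structure (shallow HOG branch, deep branch, feature concatenation, SVM classifier) mirrors the abstract.

    import numpy as np
    from skimage.feature import hog
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def hog_features(images):
        # images: array of 2-D grayscale face images
        return np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for im in images])

    # Placeholder data standing in for the AR face database
    train_imgs = np.random.rand(40, 64, 64)
    train_lbls = np.repeat(np.arange(10), 4)

    shallow = PCA(n_components=30).fit_transform(hog_features(train_imgs))   # HOG + dimensionality reduction
    deep = PCA(n_components=30).fit_transform(train_imgs.reshape(40, -1))    # stand-in for PCANet features
    fused = np.hstack([shallow, deep])                                       # feature-level fusion

    clf = SVC(kernel="linear").fit(fused, train_lbls)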

Abstract: A convolutional neural network (CNN) inference system is designed on an FPGA platform to address the low inference speed and high power consumption of convolutional neural networks on general-purpose CPU and GPU platforms. Through computing-resource reuse, parallel data processing and pipelined design, the system greatly improves computing speed, and it reduces the use of computing and storage resources through model compression and sparse matrix multipliers that exploit the sparsity of the fully connected layers. The system uses the ORL face database. The experimental results show that, at a working frequency of 100 MHz, the model inference performance is 10.24 times that of the CPU, 3.08 times that of the GPU and 1.56 times that of the baseline version, with power consumption below 2 W. When the model is compressed 4 times, the system identification accuracy is 95%.
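
The idea of exploiting fully connected layer sparsity can be illustrated with a sparse matrix-vector product; this is a generic SciPy sketch of the concept (layer sizes and pruning ratio are assumed), not the paper's hardware multiplier.

    import numpy as np
    from scipy import sparse

    # Hypothetical fully connected layer with 90% of its weights pruned to zero by compression
    dense_w = np.random.rand(512, 1024)
    dense_w[np.random.rand(512, 1024) < 0.9] = 0.0

    w_csr = sparse.csr_matrix(dense_w)       # store only the non-zero weights
    x = np.random.rand(1024)
    y = w_csr @ x                            # sparse matrix-vector product for the FC layer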

Abstract: Traditional face detection algorithms often cannot extract useful detection features from the original image, whereas convolutional neural networks can easily extract high-dimensional feature information and are widely used in image processing. To address these shortcomings, the simple and efficient Caffe deep learning framework is adopted and an AlexNet network is trained on the LFW face dataset to obtain a model classifier. Image pyramid transformation is performed on the original image, and feature maps are obtained by forward propagation. The inverse transformation yields the face coordinates, the non-maximum suppression algorithm is used to obtain the optimal position, and a two-class face detection result is finally produced. The method can realize face detection at different scales with high precision and can be used to construct a face detection system.
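
Non-maximum suppression, used here to keep the best box among overlapping candidates, can be written in a few lines of NumPy; the IoU threshold below is an assumed value.

    import numpy as np

    def nms(boxes, scores, iou_thresh=0.5):
        # boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices of the kept boxes
        order = scores.argsort()[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
            yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
            xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
            yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
            inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
            area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
            iou = inter / (area_i + area_r - inter)
            order = order[1:][iou < iou_thresh]
        return keep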

Abstract: This paper proposes applying the Transformer model to automatic proofreading of Chinese text. Unlike traditional Seq2Seq approaches based on probability and statistics, rules or BiLSTM, this deep learning model improves the overall Seq2Seq structure to achieve automatic proofreading of Chinese text. Different models are compared on public data sets using accuracy, recall and F1 value as evaluation indexes, and the experimental results show that the Transformer model greatly improves proofreading performance compared with the other models.

Abstract: Image classification distinguishes different types of images based on image information. It is an important basic issue in computer vision and the foundation of image detection, image segmentation, object tracking and behavior analysis. Deep learning is a new field of machine learning research whose motivation is to simulate the neural networks of the human brain for analytical learning; like the human brain, it can interpret image, sound and text data. The system is based on the Caffe deep learning framework. Firstly, the data set is trained and analyzed, and a deep-learning-network model is built to obtain image feature information and the corresponding data classification. Then the target image set is expanded based on the bvlc-imagenet pre-trained model, and finally a "search by image" Web application is implemented.

Abstract: To serve the intelligent customer-service dialogue system of the State Grid Customer Service Center, knowledge must be extracted from a large number of documents, traditional knowledge bases and dialogue data. This paper proposes a new knowledge graph framework that integrates a fact graph and an event evolutionary graph and can be built from multi-source data. The constructed knowledge graph performs well in vertical-domain tasks such as precise question answering, knowledge support for the customer-service system, dialogue management guidance and knowledge reasoning. The new knowledge graph has been put into use in the question answering system of the customer service center, changing the working mode of customer service and greatly improving its efficiency.

Abstract: Aiming at the low recognition rate and the reliance on experience for parameter selection in gearbox fault diagnosis with neural networks, a gearbox fault diagnosis method based on a particle-swarm-optimized BP network is proposed. A fault model is established by extracting characteristic parameters from the gear vibration principle; the model takes the gearbox feature vector as input and the fault type as output. Gearbox fault diagnosis is realized with a BP neural network, a probabilistic neural network and a particle-swarm-optimized BP neural network. The simulation results show that the BP neural network converges slowly for gearbox fault diagnosis, with a fault recognition rate of 82%; the recognition rate of the probabilistic neural network depends on spreads selected from experience, with a maximum recognition rate of 98%; and the BP neural network optimized by particle swarm optimization reaches a recognition rate of 100% with strong adaptive ability.
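
In this scheme, particle swarm optimization searches for good network weights before gradient-based BP training. The sketch below shows plain PSO minimizing the mean squared error of a tiny one-hidden-layer network on placeholder data; the network size, swarm parameters and toy data are all assumptions for illustration.

    import numpy as np

    # Toy stand-in for gearbox feature vectors (inputs) and one-hot fault types (outputs)
    X = np.random.rand(200, 6)
    Y = np.eye(3)[np.random.randint(0, 3, 200)]

    n_in, n_hid, n_out = 6, 10, 3
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out    # total number of network weights

    def forward(w, X):
        i = 0
        W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
        b1 = w[i:i + n_hid]; i += n_hid
        W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
        b2 = w[i:i + n_out]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def fitness(w):
        return np.mean((forward(w, X) - Y) ** 2)          # mean squared error over the training set

    # Plain particle swarm optimization over the flattened weight vector
    n_particles, iters = 30, 100
    pos = np.random.randn(n_particles, dim) * 0.5
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = np.random.rand(2)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([fitness(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()

In practice the PSO result would then serve as the initial weights for conventional BP fine-tuning, which is the combination the abstract describes.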

Abstract: This paper investigates the application of convolutional neural networks (CNN) to CT image diagnosis of hepatic hydatidosis. Two types of CT images of hepatic hydatid disease were selected and subjected to normalization, improved median-filter denoising and data augmentation. Based on the LeNet-5 model, an improved CNN model, CTLeNet, is proposed. A regularization strategy is adopted to reduce overfitting, a dropout layer is added to reduce the number of parameters, and classification experiments are conducted on the two classes of liver hydatid images. Meanwhile, feature visualization is realized through deconvolution to explore the potential features of the disease. The results show that the CTLeNet model achieves good results on the classification task and is expected to provide auxiliary diagnosis and decision support for liver hydatidosis through deep learning.

Abstract: A vehicle tracking system on the NVIDIA embedded platform Jetson TX2 is designed. Video data in YUV420 format are collected from the onboard camera and sent to the Tegra Parker hardware HEVC encoder for encoding. The output stream is encapsulated with RTP and sent by UDP broadcast. The Gstreamer multimedia framework is used to develop the receiving and decoding program, and the acquired video is finally tracked and displayed dynamically. The YOLO v2 detection algorithm is used to detect vehicles and provide tracking objects for the tracking system. The Meanshift method tracks the detected vehicles accurately, and an added Kalman filtering algorithm predicts the position of the target model in the current frame. The system realizes real-time encoding and transmission of ultra-high-definition 4K video at a frame rate of 60 f/s. The encoding rate of the HEVC hardware encoder in this system is three orders of magnitude higher than that of the PC-side x265 encoder, and its PSNR is 6 dB higher, making it more suitable for intelligent transportation.

Abstract: The storage capacity of neural networks has always been a major flaw; their storage is mainly reflected in the weight coefficients, so it is very difficult to train a neural network with a very large number of parameters. This paper designs an external associative memory for the neural network that can effectively serve it: the input is associated with a query, and the query result is passed to the neural network as an auxiliary input. In addition, this paper designs a vector embedding model of natural language sentences and assembles the model and the associative memory into an associative storage system that automatically associates sentence semantic vectors. The performance indicators of this system meet the design requirements.

Abstract: Aiming at the tracking failures caused by rotation, occlusion and deformation of a moving target in video sequences, a tracking method based on multi-region segmentation of the target is proposed. The target is divided into multiple overlapping regions, the regions that remain relatively stable during tracking are selected for positioning, and different template updating strategies with different region weights are then applied to the tracked target. In this way, the anti-occlusion and anti-rotation ability of the algorithm is increased. Experimental results show that the proposed method is adaptive to occlusion and rotation.

Abstract: Parameter prediction for insulated gate bipolar transistors (IGBT) can effectively avoid the economic losses and safety problems caused by their failure. Based on an analysis of IGBT parameters, this paper designs an SoC hardware system for IGBT parameter prediction based on an LSTM network. The system uses an ARM processor as the general controller to control the calls of each sub-module and the transmission of data. In the FPGA, the matrix-vector inner product algorithm is optimized to improve the data operation speed of the LSTM network, and a polynomial approximation method reduces the resources occupied by the activation functions. The experimental results show that the average prediction accuracy of the system is 92.6%, the calculation speed is 3.74 times that of the CPU, and the system has low power consumption.
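
The resource saving from a polynomial activation can be illustrated in software: a low-order polynomial fit replaces the exponential in the sigmoid used by the LSTM gates, leaving only multiply-add operations. The fitting interval and polynomial order below are assumptions, not the paper's implementation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Fit a 5th-order polynomial to the sigmoid on [-4, 4]; clamp outside that range
    grid = np.linspace(-4, 4, 200)
    coeffs = np.polyfit(grid, sigmoid(grid), 5)

    def sigmoid_poly(x):
        y = np.polyval(coeffs, np.clip(x, -4, 4))
        return np.clip(y, 0.0, 1.0)

    x = np.linspace(-6, 6, 13)
    print(np.max(np.abs(sigmoid(x) - sigmoid_poly(x))))   # worst-case approximation error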

Abstract: In recent years, convolutional neural networks (CNN) and recurrent neural networks (RNN) have been widely used in text classification. This paper proposes a model that fuses CNN and long short-term memory (LSTM) features: long-term dependencies are captured by using an LSTM in place of the pooling layer, constructing a joint CNN-RNN framework that overcomes the problem that a single convolutional network ignores the contextual semantic and grammatical information of words. The proposed method reduces the number of parameters while taking the global characteristics of text sequences into account. The experimental results show that the same level of classification performance can be achieved with a smaller framework, and that the method surpasses several other methods of the same type in accuracy.
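
A compact Keras sketch of the idea, with the LSTM standing where a pooling layer would normally sit after the convolution, is given below; the vocabulary size, sequence length and layer widths are assumptions, and a binary classification head is used purely for illustration.

    from tensorflow import keras
    from tensorflow.keras import layers

    VOCAB, MAXLEN, EMB = 20000, 200, 128    # assumed vocabulary size, sequence length, embedding width

    model = keras.Sequential([
        layers.Input(shape=(MAXLEN,)),
        layers.Embedding(VOCAB, EMB),
        layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
        layers.LSTM(64),                           # LSTM used in place of a pooling layer
        layers.Dense(1, activation="sigmoid")      # binary text classification head
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()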

Abstract: In this paper, a complete multifactor model is constructed based on financial indicators, and the SVM classification prediction within the multifactor model is improved. A ranking method is used for data preprocessing, then an SVM predicts the stock-return class, and finally the distance from a sample to the separating hyperplane is used to refine the classification prediction. With this strategy, a portfolio of CSI500 constituent stocks gains an accumulated return of 88.96% from 2016Q4 to 2018Q1. Technical-analysis moving average (MA) and channel breakout (CB) rules are used as trading-timing strategies to decrease fluctuation and drawdown, and high-frequency data are used to reconstruct the MA strategy and obtain lower fluctuation. This model provides a new research perspective: SVM characteristics are used to improve prediction, and technical analysis is used for strategy return.
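
The "distance to the hyperplane" refinement corresponds to the SVM decision function; a hedged scikit-learn sketch on placeholder factor data is shown below (the factor set, labels and top-20 cutoff are illustrative assumptions).

    import numpy as np
    from sklearn.svm import SVC

    # Placeholder factor data: rows are stocks, columns are ranked financial indicators
    X = np.random.rand(300, 8)
    y = (np.random.rand(300) > 0.5).astype(int)    # 1 = future return in the higher class

    clf = SVC(kernel="linear").fit(X, y)
    margin = clf.decision_function(X)              # signed distance to the separating hyperplane
    confident_longs = np.argsort(margin)[-20:]     # keep only samples far on the positive side
    print(confident_longs)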

Abstract: With the development of interactive intelligence technology, dialogue systems are becoming more and more practical. Unlike general chat-bots, a dialogue system for a specific domain is a practical system with contextual reasoning that is based on knowledge. The insurance domain is a typical specific domain. This paper introduces a basic construction method for a dialogue system in the insurance-related domain, which can help users construct a dialogue system for a specific domain and scene quickly and practically, and which can readily be promoted and extended.

Abstract: Identity authentication technology has developed greatly, and various fraudulent means of forging legitimate user information have appeared. Aiming at this problem, this paper proposes a deep learning face anti-spoofing algorithm that analyzes the differences between real and fraudulent faces: the real-face and photo images are mean-centered, ZCA-whitened to suppress noise, randomly rotated and otherwise preprocessed. A convolutional neural network is then used to extract facial features from the photos, and the extracted features are sent to the neural network for training and classification. The algorithm is verified on the public database NUAA. The experimental results show that it reduces computational complexity and increases recognition accuracy.

Abstract: Aiming at the high computational complexity and large memory requirements of current object detection algorithms, we designed and implemented an FPGA-based deep learning object detection system. We designed a hardware accelerator for the YOLOv2-Tiny object detection algorithm, modeled the processing delay of each accelerator module, and describe the design of the convolution module. The experimental results show performance and energy gains of 5.5x and 94.6x respectively compared with software Darknet on an 8-core Xeon server, and of 84.8x and 67.5x over the software version on the dual-core ARM Cortex-A9 of the Zynq. The design also outperforms previous work in performance.

Abstract: Aiming at the low precision of multi-scale face detection caused by large passenger flows and complicated backgrounds in large venues such as stations and shopping malls, a multi-scale face detection method based on RefineDet multi-layer feature map fusion is established. Firstly, the first-level network performs feature extraction and roughly predicts face positions on feature maps of different scales. Then, in the second level, a feature pyramid network fuses low-level and high-level features to further enhance the semantic information of small faces. Lastly, the detection boxes are suppressed a second time using the confidence and the focal loss function to achieve accurate bounding-box regression. In the experiments, the aspect ratio of the face candidate regions is set only to 1:1 in order to reduce computation and improve face detection accuracy. Experimental results on the Wider Face dataset show that the method can effectively detect faces at different scales, with mAP (mean average precision) of 93.4%, 92% and 84.4% on the Easy, Medium and Hard subsets respectively; in particular, the detection accuracy for small faces is significantly improved.

Abstract: The multi-instance multi-label learning framework is a new machine learning framework for solving ambiguity problems. In this framework, an object is represented by a set of instances and is associated with a set of category labels. The E-MIMLSVM+ algorithm is a classical classification algorithm that applies the degeneration idea within the multi-instance multi-label learning framework; however, it cannot learn from unlabeled samples, which leads to poor generalization ability. This paper implements the algorithm with a semi-supervised support vector machine. The improved algorithm can learn from a small number of labeled samples together with a large number of unlabeled samples, which helps to discover the hidden structure inside the sample set and understand its true distribution. Comparison experiments show that the improved algorithm effectively improves the generalization performance of the classifier.

Abstract: Time series prediction is an important part of anomaly detection for key performance indicators in data centers. For the time series, a wavelet basis function is used as the hidden-layer node transfer function to construct a wavelet neural network for prediction, and the momentum gradient descent method is adopted to improve the learning efficiency of the network. The particle swarm algorithm is then used to find optimal initial parameters for the neural network. The model is finally simulated in MATLAB, and the time series of key performance indicators are predicted with higher accuracy.
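
The core difference from an ordinary BP network is the hidden-layer transfer function. Below is a small NumPy sketch of a wavelet-network forward pass using a Morlet-type wavelet with per-node dilation and translation parameters; the choice of wavelet and the layer sizes are assumptions (the paper's simulations are done in MATLAB).

    import numpy as np

    def morlet(t):
        # Morlet-type wavelet used as the hidden-layer transfer function (assumed choice of basis)
        return np.cos(1.75 * t) * np.exp(-t ** 2 / 2)

    def wnn_forward(x, w_in, a, b, w_out):
        # x: input window; a, b: per-node dilation and translation parameters
        z = (w_in @ x - b) / a
        return w_out @ morlet(z)

    rng = np.random.default_rng(0)
    n_in, n_hid = 8, 12
    y = wnn_forward(rng.random(n_in), rng.random((n_hid, n_in)),
                    np.ones(n_hid), np.zeros(n_hid), rng.random(n_hid))
    print(y)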

Abstract: For a long time, traffic accidents of all kinds have seriously affected people's lives, property safety, and social and economic development. Traffic accident analysis is the investigation and study of traffic accident data: it finds the patterns of accident trends and the various factors influencing accidents, and studies the relationships between them, so as to understand the nature and internal laws of accidents quantitatively. Based on analysis of the text data recorded for traffic accidents, this paper proposes a text topic extraction model and technique to find drivers' risk factors in traffic accidents, in order to solve the problem that risky driving behaviors were previously difficult to mine and to identify the most dominant factors affecting traffic accidents. Finally, taking traffic accidents in Beijing as an example and combining the experience of traffic management experts, the effectiveness of the proposed model is verified. The results show that the model is valid and that its conclusions are consistent with long-term management experience.
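
Topic extraction from accident descriptions can be sketched with a bag-of-words representation plus a topic model; the paper's exact model is not specified here, so the LDA pipeline and toy corpus below are stand-in assumptions that only illustrate the general approach.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy accident descriptions standing in for the real traffic-accident text records
    docs = ["driver speeding in rain rear end collision",
            "fatigued driver ran red light at night",
            "drunk driving lane change side impact",
            "speeding at night wet road lost control"]

    X = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    print(lda.transform(X))      # per-document topic weights, a proxy for dominant risk factors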

Abstract: Aiming at the performance degradation of objective image quality assessment methods in practical application scenarios, a method based on visual saliency and complementary features with structure-and-energy pooling is proposed, which integrates human visual characteristics into multiple stages of image feature processing. Firstly, three complementary features of the image, namely gray energy, contrast energy and gradient structure, are processed with a joint spatial-frequency transformation according to the characteristics of the human eye. Secondly, multichannel information of these three layers of visual features is extracted and assessed respectively. Finally, the per-layer visual feature assessments are adaptively pooled from the inner layer to the outer layer based on visual characteristics and image distortion. The experiments show that the proposed method achieves a higher level, better stability and improved assessment performance in practical application scenarios.

Abstract: In order to verify the assumption that stock price movement resembles that of the past, price movement is simply divided into up and down and forecast with the K-Nearest Neighbor (KNN) algorithm. A sliding-window method is used to compare which historical period is more similar to the current one in its data features, and multiple KNN models are combined into ensemble models for strategy generalization and return adjustment. The CSI500 index is used for verification. With this prediction, a single KNN model earns a 76.72% return net of fees from 2017 to September 2018; more distant historical periods turn out to be more similar to the current period in their data features, and the ensemble models are better at risk control. The model verifies that stock prices exhibit nearest-neighbor similarity, which can be used as an investment timing strategy.
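
A minimal sketch of the KNN up/down forecast on a sliding window of past returns is shown below; the window length, train/test split and synthetic price series are assumptions used only to make the example self-contained.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder daily closing prices standing in for the CSI500 series
    prices = np.cumsum(np.random.randn(600)) + 100
    returns = np.diff(prices) / prices[:-1]

    WINDOW = 10                                    # assumed look-back window length
    X = np.array([returns[i:i + WINDOW] for i in range(len(returns) - WINDOW)])
    y = (returns[WINDOW:] > 0).astype(int)         # 1 = next-day price moves up

    split = 400
    knn = KNeighborsClassifier(n_neighbors=5).fit(X[:split], y[:split])
    print(knn.score(X[split:], y[split:]))         # out-of-sample up/down accuracy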

Abstract: Face recognition technology is an important research field of deep learning. To overcome the shortcomings of the traditional open-loop face cognition mode and deep neural network structures, and to imitate the human cognition model, which evaluates cognitive results in real time to self-optimize the feature space and the classification criteria, this paper draws on closed-loop control theory and explores an intelligent face cognition method with deep ensemble learning and a feedback mechanism. Firstly, based on the DeepID neural network, an unstructured feature space of face images with a determined global-to-local mapping relationship is established. Secondly, based on feature separability evaluation and variable precision rough set theory, a face cognition decision information system model with unstructured dynamic feature representation is established from the perspective of information theory in order to reduce the unstructured feature space. Thirdly, an ensemble random vector functional-link net is used to construct the classification criterion on the reduced unstructured feature space. Finally, an entropy measure of the face cognition result is constructed to provide a quantitative basis for the self-optimizing adjustment of the face feature space and the classification criteria. The experimental results show that the proposed method can effectively improve the recognition rate of face images compared with existing methods.

Abstract: With the development of computer technology, fire image processing technology combining computer vision, machine learning, deep learning and other techniques has been widely studied and applied. Aiming at the complex preprocessing and high false-positive rate of traditional image processing methods, this paper proposes a fire detection method based on a deep convolutional neural network model, which reduces the complex preprocessing steps and integrates the whole fire identification process into a single deep neural network for easy training and optimization. To address false detections caused by fire-like scenes, the paper exploits the motion characteristics of fire and proposes combining the position changes of the fire region between earlier and later video frames to eliminate interference from lights and other fire-like scenes. After comparing several open-source deep learning frameworks, the Caffe framework is chosen for training and testing. The experimental results show that the method realizes the recognition and localization of fire images, is suitable for different fire scenarios, and has good generalization and anti-interference ability.

Abstract: During approach and landing, the instrument landing system (ILS) is vulnerable to the external environment and airspace restrictions, resulting in reduced navigation accuracy. This paper proposes an integrated navigation algorithm combining the inertial navigation system (INS) and the GBAS landing system (GLS). The improved algorithm uses the difference between the position outputs of the integrated navigation systems as the measurement of an unscented Kalman filter (UKF) improved by a BP neural network, and obtains the globally optimal estimate of the system through an optimal weighting method. Compared with the traditional federated filtering algorithm, the proposed algorithm can effectively reduce measurement noise, reduce the error when the aircraft approaches and lands, and improve navigation accuracy.

Abstract: With the rapid development of Internet applications and the rapid growth in the number of users, online reviews and opinions about the stock market largely reflect its quotations and at the same time affect its rises and falls. How to quickly and efficiently analyze netizens' attitudes and opinions toward the stock market therefore plays an important role in guiding stock market prediction. This paper studies the rising and falling trends of stocks by analyzing the sentiment polarity of stock comments issued by different professionals. It proposes a sentiment analysis method based on an integrated financial-phrase sentiment dictionary with paragraph-end weighting, which reduces the dependence of the sentiment dictionary on the domain and effectively improves the accuracy of sentiment analysis. In addition, a windowed stock prediction model is proposed, which can be used to analyze the optimal size of the forecast event window. The experimental results show that the rising or falling trend of a particular stock can be predicted better when the prediction is based on stock market sentiment analysis.

Abstract: Automatic modulation recognition of multi-standard communication signals based on feature extraction and pattern recognition is an important research topic in the field of software radio. It is one of the key technologies for complex electromagnetic environments in non-cooperative communications, such as spectrum management and spectrum detection. A new deep-learning-based algorithm for automatic modulation recognition of communication signals is proposed in this paper. It uses autoencoders for feature extraction to obtain a feature set with high anti-interference ability, and then classifies and identifies the selected features with a BP neural network. The algorithm realizes automatic identification of MQAM signal modulation. Simulation results demonstrate that the proposed algorithm performs well in classification and recognition while effectively improving the anti-interference ability of automatic digital modulation recognition.
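
A hedged sketch of the two-stage idea (autoencoder for feature extraction, fully connected network for classification) is given below; the feature dimension, bottleneck size, number of modulation classes and random placeholder data are assumptions, not the paper's signal processing chain.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Placeholder feature vectors standing in for extracted statistics of MQAM signals
    x = np.random.rand(1000, 32).astype("float32")
    y = keras.utils.to_categorical(np.random.randint(0, 4, 1000), 4)   # 4 assumed modulation classes

    # Autoencoder: the 16-dim bottleneck serves as the noise-robust feature set
    inp = keras.Input(shape=(32,))
    code = layers.Dense(16, activation="relu")(inp)
    out = layers.Dense(32)(code)
    autoenc = keras.Model(inp, out)
    autoenc.compile(optimizer="adam", loss="mse")
    autoenc.fit(x, x, epochs=10, verbose=0)

    # BP (fully connected) classifier on the encoded features
    encoder = keras.Model(inp, code)
    clf = keras.Sequential([layers.Input(shape=(16,)),
                            layers.Dense(32, activation="relu"),
                            layers.Dense(4, activation="softmax")])
    clf.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    clf.fit(encoder.predict(x, verbose=0), y, epochs=10, verbose=0)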

Abstract: Aiming at the problem that the State Grid customer-service telephone speech recognition system recognizes core words of specific fields poorly, this paper proposes a method based on HCLG domain-weight enhancement and domain-word correction, which can add domain words quickly and in real time to dynamically optimize the language model and improve speech recognition. The model and algorithm are optimized on telephone speech from the consultation, maintenance, complaint and other business fields of the State Grid Customer Service Center, and the speech recognition results are greatly improved.

Abstract: The maturity of 4G network technology makes users' service demands on operators higher and higher. How to retain users and cater to their service needs by studying user attributes, establish convenient and fast experience-oriented services, and build a maintenance and retention system is of primary importance for the future development of China's telecom operators. This paper first analyzes the current situation of mobile user maintenance and development and puts forward user maintenance and development attributes. Secondly, a data mining method is used to build an analysis model based on user stability and user value. Finally, prospects are given on how to carry out multi-channel precise push for retaining the existing user base.

Abstract: Since rain streaks in an image have different shapes and sizes and are unevenly distributed, a single neural network is weak at learning unevenly distributed rain densities and its rain removal effect is not significant. This paper proposes a rain-density-aware guided dilated network to remove rain from a single image. The network is divided into two parts: the first part is a rain density perception network that classifies images by rain density (heavy, medium or light rain); the second part is a dilated network which, guided by the rain density classification information, learns the detailed characteristics of different rain densities to detect and remove rain streaks. Experiments show the effectiveness of the method for rain removal on synthetic and real data sets.

Abstract: A deep neural network resembles a biological neural network, so it can efficiently and accurately extract deep hidden features of information, learn multiple layers of abstract features, and better learn cross-domain, multi-source and heterogeneous content. This paper presents a personalized recommendation model that exploits the feature extraction and self-learning advantages of a deep neural network over combined multi-user and multi-item data. The model performs deep-neural-network self-learning and feature extraction on the input multi-source heterogeneous data, fuses collaborative filtering for broad personalization to generate candidate sets, and then produces a ranked set through a second round of model self-learning, finally achieving accurate, real-time and personalized recommendations. The experimental results show that the model can self-learn and extract users' implicit features well, can alleviate the data-sparsity and new-item problems of traditional recommendation systems to some extent, and realizes more accurate, real-time and personalized recommendation.

Abstract: Taking mobile robot visual navigation as the application background, an improved ORB algorithm is proposed to solve the problems of unevenly distributed feature points and excessive redundant features in visual SLAM. Firstly, the scale-space pyramid of each image is divided into grids to increase the scale information. Secondly, feature points are detected using improved adaptive FAST corner extraction with a region of interest. Thirdly, non-maximum suppression is used to suppress low-threshold feature points in the output. Finally, the variance of feature points over image regions is used to evaluate the distribution of feature points in the images. Experiments verify that the improved ORB algorithm yields a more uniform distribution, fewer overlapping output feature points and a shorter run time.
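
For reference, plain ORB extraction with a scale-space pyramid is available in OpenCV; the sketch below shows only this baseline call (the paper's grid partitioning, adaptive FAST threshold and variance-based evaluation are not reproduced), with a synthetic image standing in for a camera frame.

    import cv2
    import numpy as np

    img = (np.random.rand(480, 640) * 255).astype(np.uint8)              # stand-in camera frame
    orb = cv2.ORB_create(nfeatures=500, scaleFactor=1.2, nlevels=8)      # scale-space pyramid settings
    keypoints, descriptors = orb.detectAndCompute(img, None)
    print(len(keypoints))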

Abstract: It is difficult for a single robot to perform tasks in a complex environment, so Unmanned Aerial/Ground Vehicle (UAV/UGV) cooperative systems have received wide attention. In order to improve the efficiency of UAV/UGV cooperative systems, a global path planning method for the UGV based on targets recognized by the UAV is proposed. Firstly, the SURF algorithm is used to identify targets, and image segmentation is applied to build a map. Then, an optimized A* algorithm is used for global path planning of the UGV based on the information acquired by the UAV. Finally, simulations are performed in a typical rescue scenario. Experiments show that the SURF algorithm achieves accurate, real-time and robust target recognition, and that the optimized A* algorithm achieves feasible, real-time global path planning.
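
A baseline grid A* planner (before any of the paper's optimizations) can be written compactly; the 4-connected grid, Manhattan heuristic and toy map below are assumptions for illustration.

    import heapq

    def astar(grid, start, goal):
        # grid: 2-D list, 0 = free cell, 1 = obstacle; start/goal: (row, col) tuples
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
        open_set = [(h(start), 0, start, None)]
        came_from, g_best = {}, {start: 0}
        while open_set:
            _, g, cur, parent = heapq.heappop(open_set)
            if cur in came_from:
                continue
            came_from[cur] = parent
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    ng = g + 1
                    if ng < g_best.get(nxt, float("inf")):
                        g_best[nxt] = ng
                        heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
        return None

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))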

Abstract: The goal of image colorization is to assign a color to each pixel of a grayscale image, which is a hot topic in the field of image processing. U-Net is used as the main-line network, and a fully automatic colorization network model is designed based on deep learning and convolutional neural networks. In this model, the branch line uses the convolutional neural network SE-Inception-ResNet-v2 as a high-level feature extractor to obtain the global information of the image, and the Power Linear Unit (PoLU) function is used to replace the Rectified Linear Unit (ReLU) function in the network. The experimental results show that this colorization network model can effectively colorize grayscale images.

Abstract: This paper proposes a finger vein recognition algorithm based on CapsNets (Capsule Networks) to solve the problem of finger vein information loss in convolutional neural networks (CNN). In CapsNets, information is passed from the bottom level to the high level in the form of capsules throughout the learning process, so that the multidimensional characteristics of the finger vein are encapsulated as vectors and the features are preserved in the network rather than being lost and later recovered. In this paper, 60 000 images are used as the training set and 10 000 images as the test set. The experimental results show that the network structure features of CapsNets are more distinct than those of CNN, the accuracy is 13.6% higher than that of VGG, and the loss converges to 0.01.

Abstract: This paper designs a real-time recognition hardware system framework based on deep learning. The framework uses Keras to train the convolutional neural network model and extracts the parameters of the network. Using the FPGA+ARM hardware/software co-design method of a ZYNQ device, the ARM completes the acquisition, preprocessing and display of real-time image data, while the FPGA implements the hardened convolutional neural network and recognizes the image; the recognition result is sent to the host computer for real-time display. The framework uses the MNIST and Fashion-MNIST data sets as test samples for the hardened network model. The experimental results show that the framework can display and identify image data in real time and accurately in general scenes, and has high portability, fast processing speed and low power consumption.

Abstract: The recognition of handwritten digits is an important part of artificial intelligence recognition systems. Due to differences in individual handwriting, existing recognition systems have low accuracy. This paper uses the TensorFlow deep learning framework to implement handwritten digit recognition and its application. Firstly, the Softmax and convolutional neural network (CNN) model structures are established and analyzed. Secondly, deep learning is performed on 60 000 samples of the handwritten data set MNIST, and 10 000 samples are then used for testing and comparison. Finally, the optimal model is ported to the Android platform for application. Compared with the traditional Softmax model, the recognition rate of the TensorFlow deep learning CNN model is as high as 99.17%, an increase of 7.6%, which provides scientific research value for the development of artificial intelligence recognition systems.
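
A minimal TensorFlow/Keras sketch of a CNN on the 60 000/10 000 MNIST split is shown below; the layer sizes, epoch count and optimizer are illustrative assumptions rather than the paper's exact configuration.

    from tensorflow import keras
    from tensorflow.keras import layers

    (x_tr, y_tr), (x_te, y_te) = keras.datasets.mnist.load_data()   # 60 000 train / 10 000 test
    x_tr, x_te = x_tr[..., None] / 255.0, x_te[..., None] / 255.0

    model = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax")     # Softmax output over the ten digit classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=3, validation_data=(x_te, y_te))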

Abstract: In order to address the low accuracy of human behavior recognition tasks, a neural network based on a batch-normalized convolutional neural network (CNN) and a long short-term memory (LSTM) network is proposed. The CNN part introduces batch normalization, and the training data input to the network are normalized in mini-batches; after a fully connected layer, the features are sent to the LSTM network. The algorithm adopts a spatio-temporal two-stream network structure: the RGB images of the video are taken as the spatial-stream input, and optical flow field images as the temporal-stream input. The recognition results of the two streams are then combined in a certain proportion to obtain the final behavior recognition result. The experimental results show that the spatio-temporal two-stream neural network designed in this paper achieves high recognition accuracy in human behavior recognition tasks.

Abstract: This paper analyses the hot spots in American mainstream news media coverage of the "Belt and Road" initiative and studies the sentiment of the related public opinion. A Web crawler is used to automatically collect relevant news, and high-frequency words are filtered to obtain the media's attention hotspots. An integrated automatic summarization-convolutional neural network (CNN) model is proposed for document-level sentiment analysis. The model first extracts a summary to remove the interference of unimportant content in the original document; a convolutional neural network is then used for sentence-level sentiment analysis, a document-level sentiment score is obtained with the semantic orientation method, and articles with abnormal sentiment fluctuations are analyzed a second time. Contrastive experiments on real data show that the integrated summarization-CNN document-level sentiment analysis model is superior to the single CNN method.

Abstract: In this paper, a deep convolutional neural network system is designed and implemented on an FPGA hardware platform to address the problem that convolutional neural network (CNN) computation in deep learning is slow and time-consuming on CPU platforms. The system uses the rectified linear unit (ReLU) as the feature-map activation function and the Softmax function as the output classifier. Pipelining and parallelism are applied to the feature operations of each layer, so that 295 convolution operations of the entire CNN can be completed in one system clock cycle. The MNIST data set is used for the experiments, and the results show that the FPGA running at 50 MHz trains 8.7 times faster than a general-purpose CPU, with a recognition accuracy of 92.42% after 2 000 iterations.

Abstract: The location of acupoints directly affects the therapeutic effect, so we designed a prediction model of relative coordinates based on particle swarm optimization and a BP neural network (PSO-BP), combined with an ARM to form a system for locating human acupuncture points. Firstly, a PC is used for MATLAB simulation, training and learning; the optimal weights and thresholds are then saved, the algorithm is embedded in the ARM, and online prediction is turned into an offline process. The experimental results show that the BP neural network optimized by particle swarm optimization can effectively mitigate the local-extremum defect. It can be applied to locate the acupoints at the terminal and display the acupoint information on an LCD. After the control terminal receives the location data, it can drive the motor to move accordingly.

Abstract: Based on the second-generation artificial intelligence learning system TensorFlow, this paper constructs a neural network to detect smoke images and uses an improved motion detection algorithm to extract images of suspected smoke regions. Combined with the PCA dimensionality reduction algorithm, an Inception-ResNet-v2 network model is trained under the TensorFlow platform to recognize smoke characteristics. The algorithm realizes wide-range real-time fire detection and alarm, and experiments prove that the whole detection process accurately identifies the smoke region in the video stream; it is more accurate and adaptive than traditional smoke recognition methods and provides an effective scheme for wide-range fire smoke alarms.

Abstract: This paper proposes a design scheme for chest X-ray image analysis using embedded technology and deep learning. The hardware platform of the analysis system uses NVIDIA's Jetson TX2 as the core board, equipped with an Ethernet module, a WiFi module and other functional modules. The MobileNets convolutional neural network is trained on the labeled chest X-ray image dataset on a GPU server, and the trained model is then transplanted to the Jetson TX2 core board to detect the signs of pleural effusion, infiltration, emphysema, pneumothorax and atelectasis on the embedded platform. The trained model was tested on chest X-ray image data provided by the National Institutes of Health (NIH). Experiments show that this method achieves higher accuracy and requires less time than other methods.

Abstract: This paper proposes a convolutional neural network (CNN) for image classification that uses overlapping pooling and dropout to alleviate the overfitting problem. Compared with a traditional CNN, the proposed network obtains better results on the CIFAR-10 dataset, where the accuracy on the test set is about 9 percentage points higher than that on the training set.
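
A small Keras sketch of the two techniques is given below: the pooling window is larger than its stride (overlapping pooling) and a dropout layer precedes the classifier. Layer sizes and the dropout rate are assumptions, not the paper's configuration.

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Input(shape=(32, 32, 3)),                     # CIFAR-10 image shape
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=3, strides=2),         # overlapping pooling: window 3, stride 2
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=3, strides=2),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                                 # dropout against overfitting
        layers.Dense(10, activation="softmax")
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.summary()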

Abstract: A smart seeing-glasses system based on machine vision is proposed and designed in this work. Using the Samsung Cortex-A8-architecture S5PV210 as the central processor, running Linux, and equipped with six core modules (binocular image acquisition, GPS, voice broadcast, GSM SMS, voice calls and wireless transmission), the hardware platform of the system is built. Target scene identification is completed on a remote cloud server through a deep learning algorithm, and accurate real-time voice guidance for blind walking is then implemented. The system tests show that the smart glasses can not only provide correct travel guidance for the blind, but also have a certain ability to identify simple objects, helping the blind perform simple item classification. In addition, the system provides GPS positioning, voice calls, GSM SMS and many other auxiliary functions.

Abstract: The wide use of Unmanned Aerial Vehicles (UAVs) brings convenience to people but also causes some adverse effects; for instance, UAVs fly into no-fly zones and create safety problems, or violate civil privacy through inappropriate use. Therefore, a UAV policing system is needed to supervise UAVs and curb random flying. Traditional identification methods are insufficient in flexibility and precision. This paper studies a UAV recognition algorithm based on deep learning: an efficient recognition model is obtained and the classification of UAVs and non-UAVs is accomplished by training a learning network modified from Convolutional Neural Networks (CNNs). The model test results show that this method has higher expandability and a higher recognition rate.

Abstract: On existing production lines, the grasping points of industrial robots are fixed and workpieces can only be placed in a fixed position with a fixed posture; this assembly model can hardly satisfy complex industrial production requirements and is inefficient. A vision-guided SCARA automatic assembly system is designed to improve the original system. The machine vision system realizes rapid identification, location and attitude determination of the workpieces, and the assembly system achieves precise grasping and placement of the workpieces. The image processing algorithm is implemented in a Visual Studio MFC program, and the coordinate and attitude data are sent to the SCARA robot. The experimental results prove the good stability and rapidity of the system; production requirements are satisfied and productivity is significantly improved.