Most research articles begin by indicating that the research field or topic is very useful or significant. They may focus on ____________.
A. the quantity of research in this area
B. how useful research in this area can be
C. simply how important this research field is
D. all of the above
What type of abstract does the following abstract belong to?

This paper presents a consensus-based robust cooperative control framework for a wide class of linear time-invariant (LTI) systems, namely Negative-Imaginary (NI) systems. Output feedback, dynamic, Strictly Negative-Imaginary (SNI) controllers are applied in positive feedback to heterogeneous multi-input–multi-output (MIMO) plants through the network topology to achieve robust output feedback consensus. Robustness to external disturbances and model uncertainty is guaranteed via NI system theory. Cooperative tracking control of networked NI systems is presented as a corollary of the derived results by adapting the proposed consensus algorithm. Numerical examples are also given to demonstrate the effectiveness of the proposed robust cooperative control framework. (Wang et al. 2015: 64)

Reference: Jianan Wang, Alexander Lanzon, Ian R. Petersen. Robust cooperative control of multiple heterogeneous Negative-Imaginary systems. Automatica 61 (2015): 64–72.
A. The BPMRC/D model
B. The IMRC/D model
C. The result-driven abstract
D. The structured abstract
What type of abstract does the following abstract belong to?

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.

BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5 absolute improvement), outperforming human performance by 2.0. (Devlin et al. 2018)

Reference: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL-HLT (1) 2019: 4171–4186.
A. The BPMRC/D model
B. The IMRC/D model
C. The result-driven abstract
D. The structured abstract
Choose the incorrect one from the underlined words in the following sentence. Baogan Yihao Reduces Liver Fibrosis in Rats Induced by CCl4 and High-Fat Feeding
A. Reduces
B. in Rats Induced
C. CCl4
D. Feeding
Choose the incorrect one from the underlined words in the following sentence. Facilitative Activation with Morphological Families on English Compound Word Recognition
A. with
B. Families
C. on
D. Word Recognition