Browsing by Subject "Processamento de linguagem natural (Computação)"
Now showing 1 - 2 of 2
Item (Open Access)
Abordagem para o desenvolvimento de um etiquetador de alta acurácia para o Português do Brasil (Universidade Federal do Pará, 2011-10-21)
DOMINGUES, Miriam Lúcia Campos Serra; FAVERO, Eloi Luiz; http://lattes.cnpq.br/1497269209026542

Part-of-speech tagging is a basic task required by many natural language processing applications, such as parsing and machine translation, and by speech processing applications, for example speech synthesis. The task consists of tagging each word in a sentence with its grammatical category. Although these applications require taggers of greater precision, state-of-the-art taggers still achieve accuracies of only 96 to 97%. In this thesis, corpus and software resources are investigated for the development of a tagger with accuracy above the state of the art for Brazilian Portuguese. Based on a hybrid solution that combines probabilistic tagging with rule-based tagging, the thesis presents an exploratory study of the tagging method; of the size, quality, tag set, and textual genre of the corpora available for training and testing; and of the disambiguation of new or out-of-vocabulary words found in the texts to be tagged. Four corpora were used in the experiments: CETENFolha, Bosque CF 7.4, Mac-Morpho, and Selva Científica. The proposed tagging model is based on transformation-based learning (TBL), to which three strategies were added, combined in an architecture that integrates the outputs (tagged texts) of two free tools, TreeTagger and µ-TBL, with the modules added to the model. With the tagger trained on the Mac-Morpho corpus of the journalistic genre, tagging accuracies of 98.05% on the Mac-Morpho test set and 98.27% on Bosque CF 7.4 were achieved, both of the journalistic genre. The performance of the proposed hybrid tagger was also evaluated on the texts of the Selva Científica corpus, of the scientific genre.
Adjustments needed in the tagger and in the corpora were identified and, as a result, accuracies of 98.07% on Selva Científica, 98.06% on the Mac-Morpho test set, and 98.30% on the texts of Bosque CF 7.4 were achieved. These results are significant because the accuracies obtained are higher than those of the state of the art, validating the proposed model for obtaining a more reliable part-of-speech tagger.

Item (Open Access)
Avaliação automática de questões discursivas usando LSA (Universidade Federal do Pará, 2016-02-05)
SANTOS, João Carlos Alves dos; FAVERO, Eloi Luiz; http://lattes.cnpq.br/1497269209026542

This work investigates the use of a Latent Semantic Analysis (LSA) model in the automatic evaluation of short answers, averaging 25 to 70 words, to discursive questions. With the emergence of virtual learning environments, research on automatic grading has become more relevant, as it allows low-cost mechanical correction of open questions. In addition, automatic feedback eliminates manual correction work, which makes it possible to run classes with large numbers of students (hundreds or thousands). Research on automatic text evaluation has been under way since the 1960s, but only in the current decade has it achieved the accuracy required for practical use in teaching. For end users to have confidence, the research challenge is to develop evaluation systems that are robust and close to human evaluators. Although some studies point in this direction, many points remain to be explored. One of them is the use of bigrams with LSA: even though it contributes little to accuracy, it contributes to robustness, which we can define as reliability, because it takes the order of words within the text into account.
Seeking to improve an LSA model toward higher accuracy and greater robustness, we worked in four directions: first, we included word bigrams in the LSA model; second, we combined unigram and bigram co-occurrence models using multiple linear regression; third, we added a stage of adjustments to the LSA model score based on the number of words in the evaluated responses; fourth, we performed an analysis of the scores attributed by the LSA model against human evaluators. To evaluate the results, we compared the accuracy of the system against the accuracy of human evaluators, verifying how close the system is to a human evaluator. We used an LSA model with five steps: 1) pre-processing, 2) weighting, 3) singular value decomposition, 4) classification, and 5) model adjustments. For each step, strategies that influenced the final accuracy were explored. In the experiments we obtained 84.94% accuracy in a comparative assessment against human evaluators, while the correlation among the human specialists was 84.93%. In the domain studied, the evaluation technology produced results close to those of the human evaluators, showing that it is reaching a degree of maturity sufficient for assessment in virtual learning environments.
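The five-step LSA pipeline outlined in the second abstract (pre-processing, weighting, singular value decomposition, classification, adjustments) can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual system: the tokenization, TF-IDF weighting scheme, corpus, and latent dimension `k` are assumptions, and the bigram model, regression combination, and length-adjustment stages are omitted.

```python
# Minimal LSA sketch for scoring a short answer against a reference answer.
import numpy as np

def preprocess(text):
    # Step 1 (pre-processing): lowercase and whitespace-tokenize,
    # a stand-in for the thesis's actual pre-processing.
    return text.lower().split()

def build_tfidf(docs):
    # Step 2 (weighting): build a term-document matrix with TF-IDF weights.
    vocab = sorted({w for d in docs for w in preprocess(d)})
    index = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in preprocess(d):
            tf[index[w], j] += 1
    df = np.count_nonzero(tf, axis=1)          # document frequency per term
    idf = np.log(len(docs) / df)
    return tf * idf[:, None]

def lsa_score(reference, answer, training_docs, k=2):
    docs = training_docs + [reference, answer]
    X = build_tfidf(docs)
    # Step 3: truncated singular value decomposition to rank k.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    docs_k = np.diag(s[:k]) @ Vt[:k]           # documents in latent space
    ref, ans = docs_k[:, -2], docs_k[:, -1]
    # Step 4 (classification): cosine similarity of answer and reference.
    return float(ref @ ans / (np.linalg.norm(ref) * np.linalg.norm(ans) + 1e-12))
```

The cosine lies in [-1, 1]; in a full system, step 5 (adjustments, e.g. by response length, and the regression over unigram and bigram scores) would map this similarity to a grade.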
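The transformation-based learning (TBL) method named in the first abstract can be illustrated with a toy sketch: an initial most-frequent-tag annotation is corrected by ordered contextual transformation rules. The lexicon, tag set, and rule below are invented for illustration only and are not the thesis's actual resources.

```python
# Toy TBL-style tagger: initial-state annotation plus contextual rules.

# Hypothetical most-frequent-tag lexicon (Portuguese examples; "canto" is
# ambiguous between noun and first-person verb).
LEXICON = {"o": "ART", "canto": "N", "eu": "PRON"}

# TBL transformation rules: (from_tag, to_tag, condition on previous tag).
RULES = [
    ("N", "V", lambda prev: prev == "PRON"),  # noun after pronoun -> verb
]

def tag(sentence):
    # Initial state: assign each word its most frequent tag (default "N").
    tags = [LEXICON.get(w, "N") for w in sentence]
    # Apply each transformation rule in order, left to right.
    for old, new, cond in RULES:
        for i in range(1, len(tags)):
            if tags[i] == old and cond(tags[i - 1]):
                tags[i] = new
    return tags
```

In TBL proper, such rules are not hand-written but learned greedily from a tagged training corpus, each rule chosen to maximally reduce the remaining error.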