Title: Avaliação automática de questões discursivas usando LSA (Automatic assessment of discursive questions using LSA)
Authors: FAVERO, Eloi Luiz
Keywords: Natural language processing (Computing)
Computational linguistics
Teaching methods
Data processing
Computer-assisted instruction
Latent Semantic Analysis (LSA)
Machine learning
Virtual Learning Environments (AVAs)
User interfaces (Computer systems)
Educational technology
Human-computer interaction
Issue Date: 5-Feb-2016
Publisher: Universidade Federal do Pará
Citation: SANTOS, João Carlos Alves dos. Avaliação automática de questões discursivas usando LSA. 2016. 117 f. Tese (Doutorado) - Universidade Federal do Pará, Instituto de Tecnologia, Belém, 2016. Programa de Pós-Graduação em Engenharia Elétrica.
Abstract: This work investigates the use of a Latent Semantic Analysis (LSA) model for the automatic assessment of short answers (averaging 25 to 70 words) to discursive questions. With the emergence of virtual learning environments, research on automatic scoring has become more relevant, since it enables low-cost mechanical grading of open questions. In addition, automatic feedback eliminates manual grading work, making it possible to run classes with large numbers of students (hundreds or thousands). Research on text assessment has been conducted since the 1960s, but only in the current decade has it reached the accuracy required for practical use in teaching. For end users to trust such systems, the research challenge is to develop evaluation systems that are robust and close to human graders. Although some studies point in this direction, many points remain to be explored. One of them is the use of bigrams with LSA: even though bigrams contribute little to accuracy, they improve robustness, which we can define as reliability, because they take the order of words within the text into account. Seeking to refine an LSA model toward better accuracy and greater robustness, we worked in four directions: first, we included word bigrams in the LSA model; second, we combined unigram and bigram co-occurrence models using multiple linear regression; third, we added an adjustment stage to the LSA model score based on the number of words in the evaluated answers; fourth, we analyzed the scores assigned by the LSA model against those of human graders. To evaluate the results, we compared the accuracy of the system with the accuracy of human graders, verifying how close the system comes to a human evaluator. We used an LSA model with five steps: 1) pre-processing, 2) weighting, 3) singular value decomposition, 4) classification, and 5) model adjustments.
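The core of steps 2 to 4 above (weighting, singular value decomposition, and similarity-based classification) can be sketched in a few lines of numpy. This is a minimal illustration, not the thesis implementation: the toy documents, the log weighting, and the choice of k = 2 latent dimensions are assumptions for the example.

```python
import numpy as np

def lsa_vectors(docs, k=2):
    """Minimal LSA sketch: term-document counts -> log weighting ->
    truncated SVD -> each document as a k-dimensional latent vector."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            A[index[w], j] += 1.0
    A = np.log1p(A)                       # weighting: dampen raw counts
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T    # one row per document, k dims

def cosine(u, v):
    """Cosine similarity, the usual LSA scoring function."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical data: two paraphrases and one unrelated answer.
docs = [
    "latent semantic analysis maps words to concepts",
    "semantic analysis maps words to latent concepts",
    "the cat sat on the mat",
]
vecs = lsa_vectors(docs, k=2)
# The paraphrases should score closer to each other than to the unrelated text.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```

In a grading setting, a student answer would be scored by its cosine similarity to one or more reference answers in the latent space.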
For each step, we explored strategies that influenced the final accuracy. In the experiments we obtained 84.94% accuracy in a comparative assessment against human graders, while the correlation among human specialists was 84.93%. In the domain studied, the assessment technology produced results close to those of the human evaluators, showing that it is reaching a degree of maturity suitable for use in assessment within virtual learning environments.
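The combination of unigram and bigram co-occurrence scores via multiple linear regression, with a length-based adjustment, can be sketched with an ordinary least-squares fit. The feature values and human grades below are fabricated for illustration only; they are not data from the thesis.

```python
import numpy as np

# Hypothetical per-answer features: unigram LSA similarity, bigram LSA
# similarity, and answer length in words; targets are human grades (0-10).
uni   = np.array([0.90, 0.40, 0.75, 0.20, 0.60, 0.85])
bi    = np.array([0.80, 0.30, 0.70, 0.10, 0.55, 0.90])
nwrd  = np.array([55.0, 30.0, 48.0, 25.0, 40.0, 60.0])
human = np.array([9.0, 3.5, 7.5, 2.0, 6.0, 9.5])

# Design matrix with an intercept column; least squares finds the weights
# that combine the two co-occurrence models plus a length adjustment.
X = np.column_stack([np.ones_like(uni), uni, bi, nwrd / nwrd.max()])
w, *_ = np.linalg.lstsq(X, human, rcond=None)

# Predicted grades from the combined model.
pred = X @ w
print(np.round(pred, 2))
```

With real data, the learned weights would indicate how much each co-occurrence model and the length term contribute to matching the human graders.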
Appears in Collections:Teses em Engenharia Elétrica (Doutorado) - PPGEE/ITEC

Files in This Item:
File: Tese_AvaliacaoAutomaticaQuestoes.pdf
Size: 4,99 MB
Format: Adobe PDF

This item is licensed under a Creative Commons License.