Teses em Engenharia Elétrica (Doutorado) - PPGEE/ITEC
Permanent URI for this collection: https://repositorio.ufpa.br/handle/2011/2317
The Academic Doctorate began in 1998 and belongs to the Graduate Program in Electrical Engineering (PPGEE) of the Institute of Technology (ITEC) of the Federal University of Pará (UFPA).
Browsing Teses em Engenharia Elétrica (Doutorado) - PPGEE/ITEC by CNPq subject "CNPQ::ENGENHARIAS::ENGENHARIA ELETRICA"
Now showing 1 - 20 of 62
Item Acesso aberto (Open Access)
Abordagem Inteligente com Combinação de Características Estruturais para Detecção de Novas Famílias de Ransomware (Universidade Federal do Pará, 2024-03-22)
MOREIRA, Caio Carvalho; SALES JUNIOR, Claudomiro de Souza de; Country of nationality: Brazil
Ransomware is malicious software that aims to encrypt user files and demand a ransom to unlock them. It is a cyber threat that can cause significant financial damage, as well as compromise privacy and data integrity. Although signature-based detection scanners commonly combat this threat, they fail to identify unknown ransomware families (variants). One method to detect new threats without the need to execute them is static analysis, which inspects the code and structure of the software, combined with classification through intelligent approaches. Detection of New Ransomware Families (DNFR) can be evaluated in a realistic and challenging scenario by categorizing and isolating families for training and testing. Hence, this thesis aims to develop an effective static analysis model for DNFR, which can be applied in Windows systems as an additional security layer to check executable files upon receipt or before execution. Early ransomware detection is essential to reduce the likelihood of a successful attack. The proposed approach comprehensively analyzes executable binaries, extracting and combining various structural features, and distinguishes between ransomware and benign software by employing a soft-voting model comprising three machine learning techniques: Logistic Regression (LR), Random Forest (RF), and eXtreme Gradient Boosting (XGB). Results for DNFR demonstrated an average accuracy of 97.53%, precision of 96.36%, recall of 97.52%, and F-measure of 96.41%. Additionally, scanning and predicting individual samples took an average of 0.37 seconds. This performance indicates success in quickly identifying unknown ransomware variants and adapting the model to the constantly evolving threat landscape, suggesting its applicability in antivirus protection systems, even on resource-limited devices. Therefore, the method offers significant advantages and can assist developers of ransomware detection systems in creating more resilient, reliable, and fast-response solutions.
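To make the ensemble concrete, here is a minimal sketch of a soft-voting combination of LR, RF and XGB using scikit-learn and xgboost; the random features, labels, hyperparameters and plain train/test split are placeholder assumptions (the thesis evaluates by isolating whole ransomware families instead).

```python
# Minimal sketch of a soft-voting ensemble of LR, RF and XGB.
# X stands in for structural features extracted from executables;
# y labels samples as ransomware (1) or benign (0). All values are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X = np.random.rand(200, 32)          # placeholder structural features
y = np.random.randint(0, 2, 200)     # placeholder labels

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
    ],
    voting="soft",                   # average the predicted probabilities
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```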
Item Acesso aberto (Open Access)
Algoritmo memético cultural para otimização de problemas de variáveis reais (Universidade Federal do Pará, 2019-03-29)
FREITAS, Carlos Alberto Oliveira de; SILVA, Deam James Azevedo da; http://lattes.cnpq.br/8540875293894747; OLIVEIRA, Roberto Célio Limão de; http://lattes.cnpq.br/4497607460894318
Technology has made great strides in recent years, but computing resources for certain applications need optimization so that the costs involved in solving some problems are not high. There is a very broad area of research into the development of efficient algorithms for multimodal optimization problems. In the last two decades, the use of evolutionary algorithms in multimodal optimization has proven successful. Among these evolutionary algorithms, which are global search algorithms, one can cite Cultural Algorithms. A natural enhancement of the Cultural Algorithm is its hybridization with some other local search algorithm, so as to combine the advantages of global search with those of local search. However, the local-search Cultural Algorithms used for multimodal optimization are not always evaluated by efficient statistical tests. The objective of this work is to analyze the behavior of the Cultural Algorithm, with populations evolved by the Genetic Algorithm, when the following local search heuristics are used: Tabu Search, Beam Search, Hill Climbing and Simulated Annealing. One of the contributions of this work was the updating of the topographic knowledge of the cultural algorithm by using the triangular area defined by the best results found in the local search. To perform the analysis, a memetic algorithm was developed by hybridizing the cultural algorithm with the local search heuristics mentioned, selected one at a time. Real-world problems usually have multimodal characteristics, so the evaluations were performed using multimodal benchmark functions, whose results were evaluated by non-parametric tests. In addition, the memetic algorithm was tested on real constrained optimization problems from the engineering areas. In the evaluations carried out, the developed Cultural Algorithm presented better results when compared to the available results of the researched scientific literature.
Item Acesso aberto (Open Access)
Alocação ótima de geração distribuída em redes de distribuição utilizando algoritmo híbrido baseado em cuckoo search e algoritmo genético (Universidade Federal do Pará, 2018-09-02)
OLIVEIRA, Victoria Yukie Matsunaga de; AFFONSO, Carolina de Mattos; http://lattes.cnpq.br/2228901515752720
This thesis presents a novel Cuckoo Search (CS) algorithm called Cuckoo-GRN (Cuckoo Search with Genetically Replaced Nests), which incorporates the benefits of the genetic algorithm (GA) into the CS algorithm. The proposed method handles the abandoned nests from CS more efficiently by genetically replacing them, significantly improving the performance of the algorithm by establishing an optimal balance between diversification and intensification. The algorithm is used for the optimal location and sizing of distributed generation units in a distribution system, in order to minimise active power losses while improving system voltage stability and voltage profile. The allocation of single and multiple distributed generation units is considered. The proposed algorithm is extensively tested on mathematical benchmark functions as well as on the 33-bus and 119-bus distribution systems. Simulation results show that Cuckoo-GRN can lead to a substantial performance improvement over the original CS algorithm and other techniques currently known in the literature, regarding not only the convergence but also the solution accuracy.
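For readers unfamiliar with the mechanism, the following is an illustrative sketch of a cuckoo search in which abandoned nests are rebuilt by crossover and mutation of surviving nests, the general idea behind Cuckoo-GRN; the Lévy-flight scaling, the genetic operators and the sphere objective are assumptions, not the thesis's exact design.

```python
# Illustrative cuckoo search where the worst (abandoned) nests are replaced
# by genetic recombination of survivors instead of random restarts.
# All operators and parameters here are assumptions for demonstration.
import numpy as np
from math import gamma, pi, sin

def levy_step(beta=1.5, dim=2):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def sphere(x):                       # toy benchmark objective
    return float(np.sum(x ** 2))

def cuckoo_grn(f, dim=2, n=15, pa=0.25, iters=500, lb=-5.0, ub=5.0):
    nests = np.random.uniform(lb, ub, (n, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        # Levy-flight move of each nest around the current best.
        for i in range(n):
            cand = np.clip(nests[i] + 0.01 * levy_step(dim=dim) * (nests[i] - best),
                           lb, ub)
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        # Replace the worst fraction pa by crossover + mutation of survivors.
        k = max(1, int(pa * n))
        worst = fit.argsort()[-k:]
        survivors = fit.argsort()[:-k]
        for i in worst:
            p1, p2 = nests[np.random.choice(survivors, 2, replace=False)]
            alpha = np.random.rand(dim)                 # uniform crossover
            child = alpha * p1 + (1 - alpha) * p2
            child += np.random.normal(0, 0.1, dim)      # Gaussian mutation
            nests[i] = np.clip(child, lb, ub)
            fit[i] = f(nests[i])
    return nests[fit.argmin()], fit.min()

best_x, best_f = cuckoo_grn(sphere)
print(best_x, best_f)
```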
Item Acesso aberto (Open Access)
Análise de desempenho de algoritmos para classificação de sequências representando faltas do tipo curto-circuito em linhas de transmissão de energia elétrica (Universidade Federal do Pará, 2019-12-05)
FREIRE, Jean Carlos Arouche; MORAIS, Jefferson Magalhães de; http://lattes.cnpq.br/5219735119295290; CASTRO, Adriana Rosa Garcez; http://lattes.cnpq.br/5273686389382860
Maintaining power quality in electrical power systems depends on addressing the major disturbances that may arise in generation, transmission and distribution. Within this context, many studies have been developed aiming to detect and classify short-circuit faults in electrical systems through the analysis of the electrical signal behavior. Transmission line fault classification systems can be divided into two types: online and post-fault classification systems. In the post-fault scenario, the signal sequences to be evaluated for classification have variable length (duration). In sequence classification it is possible to use conventional classifiers such as Artificial Neural Networks, Support Vector Machines, K-Nearest Neighbors and Random Forest. In these cases, the classification process usually requires sequence preprocessing or a front-end stage that converts the raw data into sensitive parameters to feed the classifier, which may increase the computational cost of the classification system. An alternative to this problem is the Frame-Based Sequence Classification (FBSC) architecture. The problem with the FBSC architecture is that it has many degrees of freedom in designing the model (front end plus classifier), and it should be evaluated using a complete dataset and a rigorous methodology to avoid biased conclusions. Considering the importance of using efficient short-circuit fault classification methodologies, mainly with low computational cost, this thesis presents the results of an analysis of the K-Nearest Neighbors algorithm associated with the Dynamic Time Warping similarity measure (KNN-DTW) and of the Hidden Markov Model (HMM) algorithm for the fault classification task. These two techniques allow the direct use of the data without the need for front ends for signal pre-processing, and they are able to handle multivariate and variable-length time series, such as the signal sequences of the post-fault case. To develop the two proposed classification systems, simulated short-circuit fault data from the UFPAFaults public database were used. To compare results with methodologies already presented in the literature for this problem, the FBSC architecture was also evaluated on the same database, using different front ends and classifiers. The comparative assessment was performed by measuring error rate and computational cost and by statistical tests. The results showed that the HMM-based classifier was more suitable for the problem of classifying short circuits on transmission lines.
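A minimal sketch of the KNN-DTW idea follows: a dynamic-programming DTW distance feeding a nearest-neighbor vote over variable-length sequences. The toy waveforms and the hypothetical fault-type labels are illustrative only.

```python
# Minimal 1-NN classifier with a DTW distance over variable-length sequences.
import numpy as np

def dtw(a, b):
    # Classic O(len(a)*len(b)) dynamic-programming DTW on 1-D sequences.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_predict(train_seqs, train_labels, query, k=1):
    d = np.array([dtw(query, s) for s in train_seqs])
    nearest = d.argsort()[:k]
    labels, counts = np.unique(np.array(train_labels)[nearest], return_counts=True)
    return labels[counts.argmax()]

# Toy example: two waveform classes with different durations.
train = [np.sin(np.linspace(0, 4, 80)), np.sin(np.linspace(0, 4, 100)),
         np.cos(np.linspace(0, 4, 90)), np.cos(np.linspace(0, 4, 70))]
labels = ["AT", "AT", "BT", "BT"]    # hypothetical fault-type labels
print(knn_dtw_predict(train, labels, np.sin(np.linspace(0, 4, 95))))
```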
Item Acesso aberto (Open Access)
Análise de sistemas não-lineares e síntese de operadores inversos por séries de Volterra diagonais (Universidade Federal do Pará, 2019-08-22)
TEIXEIRA, Raphael Barros; BAYMA, Rafael Suzuki; http://lattes.cnpq.br/6240525080111166; COSTA JÚNIOR, Carlos Tavares da; http://lattes.cnpq.br/6328549183075122
This work proposes innovative strategies for the analysis of nonlinear systems and the synthesis of inverse operators using diagonal Volterra series. By expressing the output explicitly from the input, Volterra series enable nonlinear analysis in the frequency domain. However, the multidimensional nature of the model poses several difficulties for its systematic use. This work takes a new look at the so-called Volterra series in diagonal coordinates, in which Volterra operators are expressed as a set of linear, one-dimensional filters that process nonlinear polynomial terms of the input. The proposition of a rational form for these filters leads to exact and compact Volterra models, which exhibit a direct connection with modern nonlinear formalisms, notably the Wiener and Hammerstein block-structured models and the nonlinear autoregressive polynomial models with exogenous input (NARX). In particular, a strategy is proposed to obtain diagonal Volterra models from polynomial NARX models. The strategy is called the derivative method, because it depends only on established results of differential calculus. This is important because a NARX model can fit experimental data relatively well to describe a wide variety of practical systems; a subsequent study through the Volterra series comes as a natural additional step of analysis. This result also opens up possibilities for nonlinear synthesis. A problem that has received increasing attention in systems engineering is the synthesis of inverse nonlinear operators, through which one tries to reverse distortions generated by the underlying system while preserving the integrity of the information of interest. In this case we propose a strategy for the synthesis of inverse diagonal Volterra operators for particular classes of nonlinear polynomial models. It is a numerical approach where the synthesis is driven by an optimization problem inspired by the classic p-th order inverse operator.
Keywords: nonlinear systems, diagonal Volterra series, system identification, nonlinear analysis, dynamic inversion.
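The description of Volterra operators as linear, one-dimensional filters acting on polynomial terms of the input can be sketched directly; in the snippet below the rational (IIR) filter coefficients and the input are arbitrary examples, not identified models.

```python
# Minimal sketch of a diagonal Volterra model: each order-p branch is a
# linear one-dimensional (here rational/IIR) filter driven by u**p, and the
# branch outputs are summed. Coefficients below are arbitrary examples.
import numpy as np
from scipy.signal import lfilter

def diagonal_volterra(u, branches):
    """branches: list of (b, a, p) tuples, with numerator b, denominator a,
    and polynomial order p of the static input nonlinearity."""
    y = np.zeros_like(u, dtype=float)
    for b, a, p in branches:
        y += lfilter(b, a, u ** p)   # linear filtering of the term u^p
    return y

u = np.sin(2 * np.pi * 0.02 * np.arange(500))
branches = [
    ([0.5, 0.2], [1.0, -0.3], 1),   # linear branch
    ([0.1], [1.0, -0.5], 2),        # quadratic branch
    ([0.05], [1.0], 3),             # cubic (FIR) branch
]
y = diagonal_volterra(u, branches)
print(y[:5])
```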
Item Acesso aberto (Open Access)
Análise e modelagem em larga escala para as frequências 8, 9, 10 e 11 GHz em ambientes indoor (Universidade Federal do Pará, 2019-12-06)
BATALHA, Iury da Silva; BARROS, Fabrício José Brito; http://lattes.cnpq.br/9758585938727609; CAVALCANTE, Gervásio Protásio dos Santos; http://lattes.cnpq.br/2265948982068382
Within the context of studies related to radio propagation, this work presents a proposal for large-scale modeling of propagation loss in the 8, 9, 10 and 11 GHz bands as a function of the number of walls, distance and polarization. Measurement campaigns were conducted in the Annex II corridor and in a teaching laboratory located at the Federal University of Pará. The campaigns used V-V and H-H co-polarized directional horn antennas and V-H cross-polarized antennas, in Line of Sight (LoS) and Non-Line of Sight (NLoS) conditions; the transmitter was fixed within the environment, with 0 dBm transmission power for the V-V and H-H antenna arrangements and 15 dBm for V-H. Directional antennas with 29.3° elevation and 29° azimuth were used at transmitter and receiver for the frequencies of 8, 9, 10 and 11 GHz. The Minimum Mean Square Error (MMSE) technique was applied to determine the values of the equation parameters, such as PLE, XPD, HHPD and OPLE. The proposed propagation loss model presented satisfactory results compared to the measured data, with a low standard deviation. A point-to-point standard deviation analysis is also presented for the two environments at the studied frequencies. For the corridor, the standard deviation values using V-V polarized antennas were 7, 7.5, 5.6 and 5 dB, and for cross-polarized V-H antennas they were 5, 6.2, 2.3 and 3.5 dB at the studied frequencies. For the laboratory, the standard deviation values for V-V polarized antennas were 7, 7, 6.5 and 7.3 dB, and for H-H polarized antennas they were 9.3, 6.1, 6.1 and 6 dB. The polarization loss factor (XPD) presented in the extension of the CIX model for the corridor has values of 19.3, 28.7, 21.3 and 14.3 for the frequencies of 8, 9, 10 and 11 GHz, respectively.
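The MMSE fitting step mentioned above amounts to a least-squares fit of a log-distance model; below is a sketch with placeholder measurements, where PL0, the path loss exponent n (PLE) and the shadowing deviation are estimated from data.

```python
# Sketch of estimating path-loss parameters by least squares (the MMSE fit
# mentioned above) for a log-distance model PL(d) = PL0 + 10*n*log10(d/d0).
# The measurement vectors below are placeholders, not the thesis data.
import numpy as np

d0 = 1.0
d = np.array([1, 2, 4, 8, 12, 16, 20], dtype=float)        # distances (m)
pl = np.array([60, 66, 73, 81, 84, 88, 90], dtype=float)   # measured loss (dB)

# Linear model in x = 10*log10(d/d0): pl ~ PL0 + n*x
x = 10 * np.log10(d / d0)
A = np.column_stack([np.ones_like(x), x])
(pl0, n), *_ = np.linalg.lstsq(A, pl, rcond=None)

residuals = pl - (pl0 + n * x)
sigma = residuals.std(ddof=2)       # shadowing standard deviation (dB)
print(f"PL0 = {pl0:.1f} dB, PLE n = {n:.2f}, sigma = {sigma:.2f} dB")
```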
Item Acesso aberto (Open Access)
Análise e otimização de coberturas de invisibilidade esféricas estratificadas em camadas homogêneas e isotrópicas (Universidade Federal do Pará, 2012-06-29)
MARTINS, Tiago Carvalho; DMITRIEV, Victor Alexandrovich; http://lattes.cnpq.br/3139536479960191
In this work, we analyze and optimize invisibility cloaks stratified in concentric spherical homogeneous and isotropic layers, in which both the total scattering cross section and the number of layers are minimized. In order to increase the range of frequencies over which there is invisibility, dispersive effects are taken into account. In microwaves, we obtained discretized invisibility cloaks (derived from anisotropic cloaks) with significant reductions (greater than 20 dB) of the total scattering cross section using only 20 layers (a result achieved in the literature with at least 80 layers), and we obtained a reduction of 32 dB in the total scattering cross section for a cloak stratified in only 13 layers. Also in microwaves, we optimized dispersive invisibility cloaks that present a bandwidth 5.4 times larger than would be obtained by an optimized cloak without dispersive effects. Cloaks were also designed to operate at optical frequencies, over a wide range of frequencies.
Item Acesso aberto (Open Access)
Análise magnética e mecânica em transformadores sob correntes de energização e energização solidária (Universidade Federal do Pará, 2019-10-01)
LIMA, Diorge de Souza; BEZERRA, Ubiratan Holanda; http://lattes.cnpq.br/6542769654042813
The power transformer is one of the most important pieces of equipment in the electric power system, making it feasible to connect generating centers to consumer centers, even over long distances. Its reliable and continuous operation is of fundamental importance for service maintenance, and it is subject to various types of disturbances that can lead to failures. In this perspective, studies of the dynamic behavior of transformer windings through computer simulations have been widely used to safely and accurately evaluate their operation. Therefore, this thesis presents a methodology for the study of a 50 MVA power transformer using the finite element method for static and time-domain analyses, carried out by means of magnetic-mechanical couplings. In the first analysis (circuit study), the ATPDraw software was used to obtain the behavior of the inrush current and of sympathetic inrush during the energization of the transformer bank. Next, magnetic studies were performed in the ANSYS MAXWELL software, using a close-to-real 3D model (taking into account the characteristics of the laminated core and of the disc-type windings); the results show the behavior of the magnetic induction and of the magnetic forces in the windings of the equipment. Finally, structural (mechanical) studies were performed in the ANSYS STRUCTURAL software, again with a close-to-real 3D model, presenting as results the behavior of the total deformation in the windings, the mechanical stress suffered and the degree of safety during the occurrence of energization. The static studies considered three operating conditions: nominal condition, sympathetic inrush and inrush current. For the nominal condition, the equipment's nameplate data were used; for the energization conditions (sympathetic inrush and inrush current), the largest amplitude obtained during the simulation was used. It is noteworthy that for the time-domain analysis only the inrush current condition was analyzed, both because of the high computational cost required and because it is the worst condition found in the static analysis.
Item Acesso aberto (Open Access)
Aplicação de FANETs e CA-Markov para captura de imagens para o estudo de uso e cobertura da terra em projetos de assentamentos na Amazônia (Universidade Federal do Pará, 2019-12-06)
SOUZA, Jorge Antonio Moraes de; FRANCÊS, Carlos Renato Lisboa; http://lattes.cnpq.br/7458287841862567
Agrarian reform settlement projects are one of the measures adopted by the government in an attempt to create a sustainable relationship with nature. Since settlement areas cover more than 77,483,317.86 hectares of the Legal Amazon, it is essential to understand the causes of the environmental degradation of these spaces. To that end, Markov chains and cellular automata (CA-Markov) were used in combination to predict scenarios of land use and land cover (LULC) change from two classified images. This thesis presents an innovative methodology that differs from those usually employed with CA-Markov, since the aspects of time and space are observed by the Markov chain and serve as the basis for the transition function of the cellular automaton (CA). The methodology also covers image acquisition: since the region of interest remains under significant cloud cover for a good part of the year, obtaining images from optical sensors is impaired, which made the search for an alternative imperative. Flying Ad-hoc Networks (FANETs) can be used to complement information about the study region, capturing high-quality images without the inconvenience of clouds. On the other hand, the network nodes need to maintain the connection between them for as long as possible, which is hindered by the mobility and flight autonomy of the drones. For this reason, it is essential to use a routing protocol capable of adapting to the dynamics of the network. In addition, a routing algorithm based on a fuzzy system was also developed. Tests and simulations were performed in order to validate both the general MAPS methodology and the routing protocol.
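As a rough illustration of the CA-Markov combination, the sketch below estimates a Markov transition matrix from two classified maps and applies it cell by cell with a simple neighborhood rule; the maps, the modal-class boost and the class count are toy assumptions, not the thesis's transition function.

```python
# Illustrative CA-Markov step: class-transition probabilities estimated from
# two classified maps (the Markov part) are biased by the neighborhood state
# of each cell (the CA part). Maps and rules here are toy assumptions.
import numpy as np

def markov_matrix(map_t0, map_t1, n_classes):
    # P[i, j] = probability that class i at t0 becomes class j at t1.
    P = np.zeros((n_classes, n_classes))
    for i, j in zip(map_t0.ravel(), map_t1.ravel()):
        P[i, j] += 1
    return P / P.sum(axis=1, keepdims=True)

def ca_markov_step(grid, P, rng):
    out = grid.copy()
    H, W = grid.shape
    for r in range(H):
        for c in range(W):
            probs = P[grid[r, c]].copy()
            # CA rule: boost transitions toward the neighborhood's modal class.
            nb = grid[max(0, r - 1):r + 2, max(0, c - 1):c + 2]
            modal = np.bincount(nb.ravel(), minlength=len(probs)).argmax()
            probs[modal] *= 2.0
            probs /= probs.sum()
            out[r, c] = rng.choice(len(probs), p=probs)
    return out

rng = np.random.default_rng(0)
t0 = rng.integers(0, 3, (20, 20))           # toy classified map at time 0
t1 = rng.integers(0, 3, (20, 20))           # toy classified map at time 1
P = markov_matrix(t0, t1, 3)
t2 = ca_markov_step(t1, P, rng)             # predicted scenario
```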
Item Acesso aberto (Open Access)
Aplicação de redes neurais artificiais para predição de RSSI e SNR em ambiente de bosque amazônico (Universidade Federal do Pará, 2024-06-11)
BARBOSA, Brenda Silvana de Souza; ARAÚJO, Jasmine Priscyla Leite de; http://lattes.cnpq.br/4001747699670004; https://orcid.org/0000-0003-3514-0401; BARROS, Fabrício José Brito; http://lattes.cnpq.br/9758585938727609
The presence of green areas in urbanized cities is crucial to reduce the negative impacts of urbanization. However, these areas can influence the signal quality of IoT devices that use wireless communication, such as LoRa technology. Vegetation attenuates electromagnetic waves, interfering with data transmission between IoT devices; this creates the need for signal propagation modeling that considers the effect of vegetation. In this context, this research was conducted at the Federal University of Pará, using measurements in a wooded environment composed of the Pau-Mulato species, typical of the Amazon. Two propagation models based on machine learning, GRNN and MLPNN, were developed to consider the effect of Amazonian trees on propagation, analyzing different factors such as the height of the transmitter relative to the trunk, the beginning of the foliage, and the middle of the tree canopy, as well as the LoRa spreading factor (SF) 12 and the co-polarization of the transmitter and receiver antennas. The best models were the machine learning ones, GRNN and MLPNN, which demonstrated greater accuracy, achieving root mean square error (RMSE) values of 3.86 dB and 3.8614 dB, and standard deviation (SD) values of 3.8558 dB and 3.8564 dB, respectively. Among the classical models in the literature, the best-performing was the Floating Intercept (FI) model, with RMSE and SD errors around 7.74 dB and 7.77 dB, respectively, while the FITU-R model had the highest RMSE and SD errors, around 26.40 dB and 9.65 dB, respectively, for all heights and polarizations. Furthermore, the importance of this study lies in its potential to boost wireless communications in wooded environments: it was observed that even at short distances, at heights of 12 m and 18 m, the SNR (Signal-to-Noise Ratio) had lower values due to the influence of the foliage, but it was still possible to send and receive data. Finally, it was shown that vertical polarization achieved the best results for the Amazon forest environment.
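A GRNN is essentially Nadaraya-Watson kernel regression, so the kind of predictor described above can be sketched in a few lines; the features, targets and smoothing bandwidth below are placeholders, not the thesis's measurement data.

```python
# Minimal GRNN (Nadaraya-Watson kernel regression) for predicting RSSI from
# propagation features; data and bandwidth are placeholder assumptions.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    # Each prediction is a Gaussian-weighted average of the training targets.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy data: features = [log10(distance), antenna height]; target = RSSI (dBm).
X = np.array([[0.0, 12.0], [0.3, 12.0], [0.7, 18.0], [1.0, 18.0]])
y = np.array([-60.0, -68.0, -75.0, -82.0])
print(grnn_predict(X, y, np.array([[0.5, 18.0]]), sigma=0.5))
```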
Item Acesso aberto (Open Access)
Avaliação automática de questões discursivas usando LSA (Universidade Federal do Pará, 2016-02-05)
SANTOS, João Carlos Alves dos; FAVERO, Eloi Luiz; http://lattes.cnpq.br/1497269209026542
This work investigates the use of a model based on Latent Semantic Analysis (LSA) for the automatic evaluation of short answers, averaging 25 to 70 words, to discursive questions. With the emergence of virtual learning environments, research on automatic correction has become more relevant, since it allows the mechanical correction of open questions at low cost. In addition, automatic correction provides feedback and eliminates manual correction work, which makes it possible to create classes with large numbers of students (hundreds or thousands). Research on text evaluation has been developed since the 1960s, but only in the current decade has it achieved the accuracy necessary for practical use in teaching. For end users to have confidence, the research challenge is to develop evaluation systems that are robust and close to human evaluators. Although some studies point in this direction, there are still many points to be explored. One point is the use of bigrams with LSA: even if it does not contribute much to accuracy, it contributes to robustness, which we can define as reliability, because it considers the order of words within the text. Seeking to refine an LSA model toward better accuracy and greater robustness, we worked in four directions: first, we included word bigrams in the LSA model; second, we combined co-occurrence models of unigrams and bigrams using multiple linear regression; third, we added a stage of adjustments to the LSA model score based on the number of words of the evaluated responses; fourth, we performed an analysis of the scores attributed by the LSA model against those of human evaluators. To evaluate the system, we compared its accuracy against the accuracy of human evaluators, verifying how close the system is to a human evaluator. We use an LSA model with five steps: 1) pre-processing, 2) weighting, 3) singular value decomposition, 4) classification and 5) model adjustments. For each step, strategies that influenced the final accuracy were explored. In the experiments we obtained 84.94% accuracy in a comparative assessment against human evaluators, while the correlation among human specialists was 84.93%. In the studied domain, the evaluation technology achieved results close to those of the human evaluators, showing that it is reaching a degree of maturity suitable for use in assessment in virtual learning environments.
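Below is a minimal sketch of LSA-based scoring in the spirit described above: TF-IDF with unigrams and bigrams, truncated SVD, and cosine similarity against a reference answer. The texts and the similarity-to-grade mapping are illustrative assumptions, not the thesis's five-step pipeline.

```python
# Minimal sketch of LSA scoring of short answers: TF-IDF (unigrams + bigrams),
# truncated SVD into a latent space, then cosine similarity to a reference.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = ["photosynthesis converts light energy into chemical energy in plants"]
answers = [
    "plants use light energy and turn it into chemical energy",
    "the mitochondria is the powerhouse of the cell",
]

vec = TfidfVectorizer(ngram_range=(1, 2))            # unigrams + bigrams
X = vec.fit_transform(reference + answers)
lsa = TruncatedSVD(n_components=2).fit_transform(X)  # latent semantic space

for ans, v in zip(answers, lsa[1:]):
    sim = cosine_similarity([lsa[0]], [v])[0, 0]
    print(f"similarity {sim:.2f}  <- {ans}")
```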
Item Acesso aberto (Open Access)
Avaliação da aprendizagem: uma abordagem qualitativa baseada em mapas conceituais, ontologias e algoritmos genéticos (Universidade Federal do Pará, 2007-05-18)
ROCHA, Francisco Edson Lopes da; FAVERO, Eloi Luiz; http://lattes.cnpq.br/1497269209026542
In the last two decades, the development of areas such as Computer Networks and Artificial Intelligence (AI) has favored the growth of other areas of knowledge, like Education. In this area, new discoveries have changed the focus of research from old behaviorist educational theories to constructivism, leading to a better understanding of how learning occurs. Meaningful Learning (ML) is a constructivist theory in evidence nowadays, and the Concept Map (CM) is its main cognitive tool. Additionally, recent developments in Distance Learning (DL) have made it possible to apply the educational process on a larger scale. In this thesis, automatic learning assessment mediated by concept maps is investigated. This is related to a qualitative approach, named formative assessment, which is compliant with Bloom's model, a reference for educational processes: teaching, learning, and learning assessment. The proposal presented in this thesis is seen as an alternative solution to an important issue in the area of Education: how to evaluate learning qualitatively, respecting each student's cognitive processes? The integration of concept maps, domain ontologies, and genetic algorithms allows for advances in automatic learning assessment and assistance. The paradigm of merely quantitative assessment is broken, and a new approach to gradual and continuous assistance in learning is presented. Following this approach, it is possible to accompany students individually, respecting their idiosyncratic ways of learning, and also to group students based on specific cognitive characteristics or degrees of development. This thesis begins a new research area, which can be synthesized as "automatic qualitative assessment of learning centered in Concept Maps, based on AI techniques: ontologies and genetic algorithms". In this new research area, the thesis originated the following contributions:
- a prototype of an environment designed to aid teaching, learning, and learning assessment, founded upon Meaningful Learning, encompassing a concept map editor, an ontology editor, and an assessment module;
- a proposal concerning the use of genetic algorithms and ontologies in the qualitative assessment/assistance of learning, allowing for step-by-step individual assistance, assistance to groups of students, and comparisons among students.
Domain ontologies are generated by the teacher, who uses an ontology editor provided by the environment. They comprise the structural knowledge that must be learned by students before they can manage other forms of knowledge. The genetic algorithm was designed to run in two distinct modes: i) generating multiple CMs to compare with the student's CM, allowing for learning assessment at any moment of the course; this assessment is relative, centered in a determined number of concepts which represent a partial structure of the knowledge domain being studied; and ii) generating an optimal CM according to the ontology created by the teacher, to permit a complete assessment of the learning of the knowledge domain which was studied. The proposed model was evaluated through the implementation of prototypes of the assessment tool. The genetic algorithm developed uses the ontologies as its search spaces. It emulates meaningful-learning cognitive processes and constructs CMs that can be semantically compared to that of the student. Its fitness function represents a way of measuring distances in the cognitive field, with the measurement unit given by a taxonomy that organizes semantic dimensions and, inside these, linking phrases. This taxonomy is used by teachers when they construct their ontologies, and by students when they construct their concept maps. The main challenges faced in the development of the research reported in this thesis were: 1) definition of a domain ontology model that could be applied to learning assessment; 2) definition of a method and a scale that could be applied to the cognitive domain; and 3) definition of a search mechanism in the ontology in accordance with constructivist theories of learning assessment. The research described in this thesis can be further developed with new functionalities or improvements to functionalities already implemented. Some possibilities are suggested at the end of the thesis, the main one being the deployment of the environment on the Internet. This thesis has generated 7 (seven) scientific contributions: 1 (one) in a Qualis A journal, 1 (one) in a Qualis B journal, 2 (two) in international congresses, and 3 (three) in national congresses. The results of this research advance what had already been attained by the AmAm/UFPA research group, in whose context this thesis is inserted.
Item Acesso aberto (Open Access)
Avaliação do impacto da produção eólica na reserva operativa de curto e longo prazo utilizando séries temporais (Universidade Federal do Pará, 2019-05-30)
SANTOS, Fernando Manuel Carvalho da Silva; BEZERRA, Ubiratan Holanda; http://lattes.cnpq.br/6542769654042813; BRANCO, Tadeu da Mata Medeiros; http://lattes.cnpq.br/8911039344594817
One of the main concerns of a system planner is to size generation equipment, mainly to meet load growth and to achieve certain spinning reserve requirements. In general, generation systems must be sized with sufficient capacity, flexibility and robustness to respond to several operational challenges. However, the volatility and variability that come from renewable generation are a relatively recent concern for system planners. This thesis evaluates the potential of diverse wind power patterns to balance the global power output of wind farms using the concept of operating reserve assessment. To achieve this, operating reserve assessment models are used to evaluate bulk generation systems under several conditions of wind power geographic distribution. Different wind behavior patterns and wind power penetration levels are tested using a modified configuration of the IEEE RTS-96 and a planning configuration of the Portuguese Generation System. The results highlight that in a large country-scale system with different wind characteristics, the diversification of wind behavior may be conducive to a compensation of wind power fluctuations, which can significantly decrease the need for system operating reserves. This effect is verified using probability distribution functions of reserve needs estimated by sequential Monte Carlo simulation (SMCS), such that useful information regarding generation capacity flexibility is drawn from the evaluations.
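The sequential Monte Carlo idea can be illustrated compactly: generating units fail and repair hour by hour while load and wind vary, and reserve shortfalls are tallied into an empirical distribution. All rates, capacities and series below are toy assumptions, not the RTS-96 or Portuguese system data.

```python
# Illustrative sequential Monte Carlo simulation of operating-reserve needs:
# conventional units follow a two-state (up/down) chain while load and wind
# vary hourly; uncovered margin is recorded as a shortfall.
import numpy as np

rng = np.random.default_rng(1)
HOURS, N_UNITS, CAP = 8760, 10, 100.0        # one year, 10 x 100 MW units
LAMBDA, MU = 1 / 1000, 1 / 50                # failure / repair rates (per h)

up = np.ones(N_UNITS, dtype=bool)
shortfalls = []
for t in range(HOURS):
    # Sequential state transitions of each conventional unit.
    fail = rng.random(N_UNITS) < LAMBDA
    repair = rng.random(N_UNITS) < MU
    up = np.where(up, ~fail, repair)
    load = 600 + 150 * np.sin(2 * np.pi * t / 24)          # toy load (MW)
    wind = 200 * rng.beta(2, 5)                            # toy wind (MW)
    margin = up.sum() * CAP + wind - load
    if margin < 0:
        shortfalls.append(-margin)

print("loss-of-load hours:", len(shortfalls))
print("mean shortfall (MW):", np.mean(shortfalls) if shortfalls else 0.0)
```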
Item Acesso aberto (Open Access)
Circuladores de grafeno de banda ultralarga para região THz (Universidade Federal do Pará, 2019-06-07)
SILVA, Samara Leandro Matos da; DMITRIEV, Victor Alexandrovich; http://lattes.cnpq.br/3139536479960191
Non-reciprocal components are indispensable parts of many microwave and optical systems. In the future, THz communication systems will also require these components. Existing publications show that the bandwidth of graphene-based circulators in the THz region can be 10% to 20%, at the cost of rather complicated structures. The suggested circulators are formed by a graphene junction with a concave pattern connected to the waveguides. The graphene is supported by SiO2/Si layers. The circulating behavior is based on the asymmetry of the graphene conductivity tensor that appears due to magnetization by a DC magnetic field applied normally to the plane of the graphene. We discuss the main parameters that define the bandwidth and their influence on it. The circulators have a record bandwidth, twice as high as those published. We have shown that the Y-circulator can have a bandwidth of 42% in the frequency range (2.75 ÷ 4.2) THz, with isolation better than −15 dB and insertion losses better than −2 dB, provided by a DC magnetic field of 1.5 T and a chemical potential of 0.15 eV. For the two 4-port circulators we achieved a bandwidth of 44% for the same physical parameters.
Item Acesso aberto (Open Access)
Classification and characterization methods of non-technical losses on smart grid scenarios (Universidade Federal do Pará, 2024-03-28)
BASTOS, Lucas de Lima; ROSÁRIO, Denis Lima do; http://lattes.cnpq.br/8273198217435163; https://orcid.org/0000-0003-1119-2450; CERQUEIRA, Eduardo Coelho; http://lattes.cnpq.br/1028151705135221
Nowadays, grid resilience as a feature has become non-negotiable, especially when power interruptions can impact the economy and society. The widespread popularity of Smart Grids (SGs) enables an immense amount of fine-grained electricity consumption data to be collected. However, risks still exist in the Smart Grid (SG): while SG systems exchange valuable data, the distribution system loses substantial electrical energy. These losses are divided into two types: technical and non-technical. Non-technical losses (NTL) are any electrical energy consumed that is not invoiced. They may occur due to illegal connections, fraudulent activities, issues with energy meters such as delays in installation or reading errors, contaminated, defective, or non-adapted measuring equipment, very low valid consumption estimates, faulty connections, and disregarded customers. Non-technical losses are the primary cause of revenue loss in the SG; annually, electrical utilities incur billions in losses due to non-technical reasons. This thesis presents two NTL detection methods: classification and characterization. We create an ensemble predictor-based time series classifier to perform NTL detection. This predictor uses the user's energy consumption as the data input for classification, from splitting the data to executing the classifier, and it takes the temporal aspects of energy consumption data into account during the pre-processing, training, testing, and validation stages. The classification method has the advantage of handling heterogeneous features in the data. The characterization method proposes a study based on Information Theory Quantifiers (ITQ) to mitigate this challenge. First, we use a sliding window to convert the user's energy consumption time series into a Bandt-Pompe (BP) probability distribution function. Then, we extract the chosen ITQ. Finally, we apply each metric to the Probability Density Function (PDF) and map the layers to characterize their behavior. The characterization method is advantageous when dealing with big data. Overall, our best results were recorded with the fraud-detection time series classifier (TSC) model, which improved the empirical performance metrics by 10% or more over the other developed models. Our results show that users with normal and abnormal energy consumption can be distinguished using only Information Theory Quantifiers, by considering the range of values for each metric.
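The Bandt-Pompe step described above can be sketched as follows: windows of the consumption series are mapped to ordinal patterns and a normalized permutation entropy (one possible information-theory quantifier) is computed. The two consumption series below are synthetic stand-ins.

```python
# Minimal sketch of the Bandt-Pompe construction: slide a window over a
# series, map each window to its ordinal pattern, and compute a normalized
# permutation entropy from the resulting pattern distribution.
import math
from collections import Counter
import numpy as np

def permutation_entropy(series, order=3):
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        patterns[tuple(np.argsort(window))] += 1   # Bandt-Pompe ordinal pattern
    total = sum(patterns.values())
    probs = [c / total for c in patterns.values()]
    H = -sum(p * math.log(p) for p in probs)
    return H / math.log(math.factorial(order))     # normalize to [0, 1]

rng = np.random.default_rng(0)
normal = rng.normal(10, 1, 500)                    # irregular daily consumption
fraud = np.full(500, 2.0) + 0.01 * np.arange(500)  # suspiciously regular series
print("normal:", permutation_entropy(normal))      # close to 1
print("fraud :", permutation_entropy(fraud))       # close to 0
```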
Item Acesso aberto (Open Access)
Compressão de sinais em sistemas de rádio sobre fibra digital para redes fronthaul (Universidade Federal do Pará, 2019-07-23)
MATE, Dércio Manuel; TEIXEIRA, António Luis de Jesus; OLIVEIRA, Rosinei de Sousa; http://lattes.cnpq.br/3853897074036715; COSTA, João Crisóstomo Weyl Albuquerque; http://lattes.cnpq.br/9622051867672434
The introduction of technologies such as Carrier Aggregation (CA), Massive Multiple Input Multiple Output (MIMO) and Coordinated Multipoint (CoMP), aiming to improve the performance of LTE and LTE-A systems, increases the challenge of deploying the mobile fronthaul, due to the limited network capacity to support higher transmission rates. One approach to deal with the fronthaul's capacity limitation is data compression. Several techniques have been developed for signal compression in the fronthaul, and most of them compress the signal transmitted in baseband. In this work, a compression technique is developed for specific scenarios of Digital Radio-over-Fiber systems, transmitting the signal at an intermediate frequency (IF). This technique uses the radio channel state information (CSI) to control signal compression in the fronthaul. The simulation results with the developed technique demonstrate its ability to reduce the data transmitted on the network by 45.05%. In addition, this technique allows the transmission of 64-QAM modulated signals using a lower quantizer resolution, e.g., 4 bits per sample, while maintaining the EVM below the 3GPP-recommended threshold (8%). Finally, the performance of the fronthaul network is evaluated experimentally on a 20-km optical link, considering scenarios with and without signal compression.
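The resolution-versus-EVM trade-off discussed above can be illustrated with a uniform quantizer and an RMS error measure; the Gaussian stand-in signal and the simple EVM definition used here are simplifying assumptions, not the thesis's CSI-controlled scheme.

```python
# Sketch of the trade-off above: uniform quantization of a waveform at a
# given bit resolution, and the resulting EVM against the unquantized
# reference. Signal and resolutions are illustrative only.
import numpy as np

def quantize(x, bits):
    levels = 2 ** bits
    step = (x.max() - x.min()) / (levels - 1)
    return np.round((x - x.min()) / step) * step + x.min()

rng = np.random.default_rng(2)
signal = rng.normal(0, 1, 10000)                 # stand-in for an IF waveform

for bits in (3, 4, 6, 8):
    q = quantize(signal, bits)
    evm = 100 * np.sqrt(np.mean((q - signal) ** 2) / np.mean(signal ** 2))
    print(f"{bits} bits -> EVM {evm:.2f}%")
```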
Item Acesso aberto (Open Access)
Contribuição do controle secundário de tensão aplicado em um parque eólico composto de aerogeradores DFIG à estabilidade de tensão de longo-prazo (Universidade Federal do Pará, 2019-08-30)
MATOS, Kayt Nazaré do Vale; AFFONSO, Carolina de Mattos; http://lattes.cnpq.br/2228901515752720; VIEIRA, João Paulo Abreu; http://lattes.cnpq.br/8188999223769913
This thesis investigates the use of secondary voltage control (SVC) in a wind park based on the doubly fed induction generator (DFIG) and its effect on long-term voltage stability. The wind park, consisting of several wind turbines, is modeled as a DFIG equivalent model. Initially, the performance of the SVC applied to the wind park is compared with the case where only primary voltage control (PVC) is adopted. A detailed analysis is conducted with time-domain simulations, considering high and low wind speed regimes, the control variable limits of the wind generators, static and dynamic loads, as well as dynamic models of the overexcitation limiter (OEL) and the load tap changing (LTC) transformer. Based on the results, the use of secondary voltage control in a DFIG-based wind park can postpone the long-term voltage collapse of the power system. Furthermore, an adverse situation was observed in which the SVC can lead the grid-side converter (GSC) of the DFIG to absorb reactive power from the electric grid and lose the capability of injecting reactive power into the grid. Thus, two novel auxiliary control strategies inserted in the GSC control loop are presented to prevent reverse reactive flow in the GSC, as well as to force the provision of reactive power to the system via the GSC. The results indicate the effectiveness of the auxiliary control strategies in postponing voltage collapse and increasing the voltage stability margin of the system.
Item Acesso aberto (Open Access)
Desenvolvimento a eventos discretos de um controlador de balanceamento de fases para sistemas legados de baixa tensão e microgrids (Universidade Federal do Pará, 2019-06-10)
VILCHEZ, José Ruben Sicchar; SILVA, José Reinaldo; http://lattes.cnpq.br/9317869378701106; COSTA JÚNIOR, Carlos Tavares da; http://lattes.cnpq.br/6328549183075122
In upgrading legacy low-voltage systems into urban microgrids, the development of phase-balancing algorithms becomes useful and important to ensure robust and reliable load balancing and to establish an efficient automation workflow among consumers, the legacy low-voltage grid and the supervision center of the electrical power distribution network. This may constitute an alternative phase-balancing control system based on the dynamic switching of consumer units rather than on electrical current injection by microgrids. The formal automation design of these algorithms becomes an interesting milestone for performance evaluation and for the validation of properties required for their insertion in the new microgrid architecture. It makes it possible to evaluate the system's reliable performance by verifying dynamic properties, as well as the univocal solutions that ensure load transfer and the load-stability robustness of the low-voltage grid, without operation interruptions or conflicting events. This work proposes a new phase-load-balancing control system based on combined algorithms resulting from a Hierarchical Petri net system design. Through this model, an optimized and reliable automated workflow of load balancing in the low-voltage grid phases was obtained, with an efficient choice of consumer units for the switching process, aiming at a robust steady state of load against unbalances between phases and a minimized neutral current. From the obtained model, called the "Transformer-Phase Balancing Controller" (T-PBC), four integrated algorithms were developed: the Load Transfer Algorithm, which calculates the load imbalance level and the power to be transferred between the transformer phases; the Consumption Diagnose Algorithm, which identifies the load level margins in each consumer unit; the Consumption Forecast Algorithm, which forecasts the future monthly energy states of consumers; and the Switch Selection Algorithm, which selects the consumer units to switch based on the future state of energy consumption, the load level margins and the average of the future energy states. The performance results show an efficient reduction of the neutral current and of the average load unbalance in the low-voltage grid phases, with load-stability robustness of about three months, making it an efficient alternative against load unbalances in the legacy low-voltage grid and in microgrids.
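A greedy toy version of the consumer-switching idea conveys the intuition: repeatedly move the unit on the heaviest phase whose transfer best narrows the spread between phase loads. The loads and phase assignments below are random placeholders; the thesis instead drives switching with consumption forecasts, load margins and a Petri-net-verified workflow.

```python
# Illustrative greedy phase balancing: move consumer units from the heaviest
# to the lightest phase when the move reduces the phase-load spread.
import numpy as np

rng = np.random.default_rng(3)
loads = rng.uniform(1, 5, 30)                  # consumer-unit loads (kW)
phase = rng.integers(0, 3, 30)                 # current phase (0=A, 1=B, 2=C)

def unbalance(loads, phase):
    totals = np.array([loads[phase == p].sum() for p in range(3)])
    return totals.max() - totals.min(), totals

for _ in range(10):                            # at most 10 switch operations
    base, totals = unbalance(loads, phase)
    heavy, light = totals.argmax(), totals.argmin()
    candidates = np.flatnonzero(phase == heavy)
    best, best_gain = None, 0.0
    for i in candidates:                       # unit whose move helps most
        trial = phase.copy()
        trial[i] = light
        gain = base - unbalance(loads, trial)[0]
        if gain > best_gain:
            best, best_gain = i, gain
    if best is None:                           # no improving move left
        break
    phase[best] = light

print("final phase totals:", unbalance(loads, phase)[1].round(2))
```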
Item Acesso aberto (Open Access)
Desenvolvimento e avaliação empírica de um simulador educacional para o apoio ao ensino de ECG, baseado na orientação espacial do coração (Universidade Federal do Pará, 2018-09-21)
PONTES, Paulo André Ignácio; SERUFFO, Marcos Cesar da Rocha; http://lattes.cnpq.br/3794198610723464; FRANCÊS, Carlos Renato Lisboa; http://lattes.cnpq.br/7458287841862567
The electrocardiogram (ECG) is one of the most commonly used diagnostic procedures in medicine, so it is essential that undergraduate medical students learn to interpret it correctly while they are still in training. Students typically go through classical learning (e.g., lectures), but they are generally not efficiently trained in ECG interpretation. In this regard, educational support methodologies and tools for medical practice, such as educational software, should be considered a valuable approach for medical training purposes. This thesis deals with the development of a simulator (VETOECG) that allows experiential teaching, so that students can relate the projections of the cardiac electrical vectors, through the manipulation of the spatial orientation of the heart, to the repercussions in the respective waves of the ECG. In addition, this thesis reports a formal experiment (pre/post-test with a randomized control group) to empirically evaluate the learning effectiveness of the tool, and analyzes subjective factors of the students' perception regarding motivation and user experience, collected through questionnaires. The results indicated that the simulator has positive learning efficacy compared to the traditional methodologies used for learning in the proposed study (statistically significant difference, p-value < 0.0001*, median of 38.5 points and interquartile range of 23.1 to 46.2 points). The simulator also proved adequate across the most diverse dimensions, being evaluated positively in terms of motivation (88.15%), user experience (76%) and learning (96.5%).
Item Acesso aberto (Open Access)
Designing feasible deployment strategies for cell-free massive MIMO networks: assessing cost-effectiveness and reliability (Universidade Federal do Pará, 2024-06-14)
FERNANDES, André Lucas Pinho; MONTI, Paolo; http://lattes.cnpq.br/4220330196422554; COSTA, João Crisóstomo Weyl Albuquerque; http://lattes.cnpq.br/9622051867672434
Cell-free Massive Multiple-Input Multiple-Output (mMIMO) networks are a promising solution for the Sixth Generation of mobile systems (6G) and beyond. These networks utilize multiple distributed antennas to transmit and receive signals coherently, under an apparently non-cellular communication paradigm that eliminates the traditional concept of cells in mobile networks. This shift poses significant deployment challenges, as conventional tools designed for cellular systems are inadequate for planning and evaluating cell-free mMIMO architectures. In this sense, the literature has been developing models specific to cell-free mMIMO that deal with system coordination, fronthaul signaling, the required computational complexity of processing procedures, segmented fronthaul, transitioning from cellular network deployments, and integration with Open Radio Access Network (O-RAN) technologies. These advancements are instrumental in transforming cell-free mMIMO from a theoretical system into a practical application. Despite this, further study is needed to integrate existing models and develop practical evaluation tools to assess the feasibility of cell-free mMIMO and its enablers. This thesis addresses these gaps by proposing new tools to evaluate the feasibility of cell-free mMIMO networks in terms of reliability and costs. The first tool focuses on evaluating the reliability of cell-free mMIMO. It is used to improve the understanding of possible failure impacts and to develop effective protection schemes for the fronthaul network of cell-free mMIMO networks. Results for an indoor office implementation with an area of 100 m² and a Transmission-Reception Point (TRP) spacing of 20 m demonstrate that cell-free systems with segmented fronthaul, i.e., with serial fronthaul connections between TRPs, require protection strategies. It is shown that interconnecting serial chains and partially duplicating serial chains (40% redundancy) are effective protection schemes. Finally, in the considered indoor scenarios, interconnection appears to be the most feasible alternative when the number of serial chains is higher than three. The second tool assesses the Total Cost of Ownership (TCO) of cell-free mMIMO and its enablers, considering essential aspects such as user demands, fronthaul bandwidth limitations, and hardware processing capacities. The tool is used to evaluate the costs of two functional splits from the literature that are equivalent to distributed and centralized processing architectures for cell-free mMIMO networks. Results for an ultra-dense urban scenario covering an area of 0.25 km² with up to 800 TRPs reveal that centralized processing is more feasible for most user demands, TRP hardware configurations, and cost considerations. Despite this, distributed processing may be more feasible in limited cases of low demand (up to 50 Mbps per user) and under massive cost reductions in expenses related to TRP deployment.
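As a back-of-the-envelope companion to the protection-scheme results, the arithmetic below compares the availability of an unprotected serial fronthaul chain with one whose links are 40% duplicated; the link count and per-link availability are assumptions, not figures from the thesis.

```python
# Back-of-the-envelope reliability comparison in the spirit of the protection
# schemes above: a serial chain of fronthaul links versus the same chain with
# a fraction of its links duplicated. Numbers are illustrative assumptions.
links = 10          # fronthaul links in one serial chain
a = 0.999           # availability of a single link (assumed)

serial = a ** links                            # every link must be up
dup_frac = 0.4                                 # 40% redundancy, as in the text
n_dup = int(dup_frac * links)
protected = (1 - (1 - a) ** 2) ** n_dup * a ** (links - n_dup)

print(f"unprotected chain availability: {serial:.6f}")
print(f"partially duplicated chain:     {protected:.6f}")
```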