Browsing by Subject "Redes neurais profundas"
Now showing 1 - 2 of 2
Dissertation (Open Access)
5G MIMO and LIDAR data for machine learning: mmWave beam-selection using deep learning (Universidade Federal do Pará, 2019-08-29)
DIAS, Marcus Vinicius de Oliveira; KLAUTAU JÚNIOR, Aldebaro Barreto da Rocha; http://lattes.cnpq.br/1596629769697284

Modern communication systems can exploit the growing amount of sensor data available in advanced equipment to reduce the overhead associated with link configuration. The increasing complexity of networks also suggests that machine learning (ML) techniques, such as deep neural networks, can effectively improve 5G technologies. However, the lack of large datasets makes it harder to investigate the application of deep learning to wireless communications. This work presents a simulation methodology (RayMobTime) that combines a vehicle traffic simulator (SUMO) with a ray-tracing simulator (Remcom's Wireless InSite) to generate channels representing realistic 5G scenarios, together with LIDAR sensor data created via Blensor. The resulting dataset is used to investigate beam-selection techniques for vehicle-to-infrastructure communication using millimeter waves under different architectures: a distributed architecture, which uses information from only the selected vehicle and processes the data on the vehicle, and a centralized architecture, which uses all sensor information available at a given moment and processes it at the base station. The results indicate that deep convolutional neural networks can be used to select beams under a top-M classification framework (a minimal sketch of top-M selection appears after this listing). They also show that a distributed LIDAR-based architecture provides robust performance irrespective of the car penetration rate, outperforming the other architectures, and can additionally detect line-of-sight (LOS) conditions with reasonable accuracy.

Dissertation (Open Access)
Compression of activation signals from partitioned deep neural networks exploring temporal correlation (Universidade Federal do Pará, 2024-11-27)
SILVA, Lucas Damasceno; KLAUTAU JÚNIOR, Aldebaro Barreto da Rocha; http://lattes.cnpq.br/1596629769697284

The use of artificial neural networks for object detection, along with advancements in 6G and IoT research, plays an important role in applications such as drone-based monitoring of structures, search and rescue operations, and deployment on hardware platforms like FPGAs. However, a key challenge in implementing these networks on such hardware is the need to economize computational resources. Despite substantial advances in computational capacity, implementing devices with ample resources remains challenging. As a solution, techniques for partitioning and compressing neural networks, as well as for compressing activation signals (or feature maps), have been developed. This work proposes a system that partitions neural network models for object detection in videos, allocating part of the network to an end device and the remainder to a cloud server. The system also compresses the feature maps generated by the last layers on the end device by exploiting temporal correlation, enabling a predictive compression scheme (a sketch of such a scheme appears at the end of this listing). This approach allows neural networks to be embedded in low-power devices while respecting the computational limits of the device, the transmission rate constraints of the communication channel between the device and the server, and the network's accuracy requirements. Experiments conducted on pre-trained neural network models show that the proposed system can significantly reduce the amount of data to be stored or transmitted by leveraging temporal correlation, facilitating the deployment of these networks on devices with limited computational power.
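To make the top-M classification framework from the first dissertation concrete, here is a minimal Python/NumPy sketch: beam selection is treated as ranking a codebook of beams by model score, and a prediction counts as correct if the true best beam appears among the M highest-scoring candidates. The scores, codebook size, and ground-truth labels below are hypothetical placeholders, not the dissertation's actual model outputs or data.

import numpy as np

def top_m_beams(scores, m):
    # Indices of the m highest-scoring beams, best first.
    return np.argsort(scores)[::-1][:m]

def top_m_accuracy(score_matrix, true_beams, m):
    # Fraction of samples whose true best beam is among the top-m predictions.
    hits = sum(
        int(best in top_m_beams(scores, m))
        for scores, best in zip(score_matrix, true_beams)
    )
    return hits / len(true_beams)

# Hypothetical example: 4 samples, a codebook of 8 beams, random scores
# standing in for CNN outputs and ray-tracing ground truth.
rng = np.random.default_rng(0)
scores = rng.random((4, 8))
true_beams = np.array([2, 5, 0, 7])
print(top_m_accuracy(scores, true_beams, m=3))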

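For the second dissertation's predictive compression of feature maps, the sketch below shows one standard way to exploit temporal correlation, closed-loop predictive (DPCM-style) coding: the end device transmits only the quantized residual between the current feature map and the previous reconstruction, and the server mirrors that reconstruction. The quantization step, array shapes, and function names are assumptions for illustration; the dissertation's actual scheme may differ.

import numpy as np

STEP = 0.05  # hypothetical uniform quantization step

def encode(feature_map, prev_recon):
    # Device side: quantize the residual against the previous reconstruction.
    residual = feature_map - prev_recon
    symbols = np.round(residual / STEP).astype(np.int16)  # what gets transmitted
    recon = prev_recon + symbols.astype(np.float32) * STEP  # mirror the decoder
    return symbols, recon

def decode(symbols, prev_recon):
    # Server side: rebuild the feature map from the residual symbols.
    return prev_recon + symbols.astype(np.float32) * STEP

# Hypothetical stream of temporally correlated feature maps.
rng = np.random.default_rng(1)
base = rng.random((16, 16)).astype(np.float32)
frames = [base + 0.01 * t * np.ones_like(base) for t in range(5)]

enc_recon = np.zeros_like(base)  # encoder and decoder start in the same state
dec_recon = np.zeros_like(base)
for fmap in frames:
    symbols, enc_recon = encode(fmap, enc_recon)
    dec_recon = decode(symbols, dec_recon)
    # Small residuals concentrate the symbols near zero, so entropy coding
    # needs fewer bits; per-element error stays within half a step.
    assert np.max(np.abs(dec_recon - fmap)) <= STEP / 2 + 1e-6

The closed loop, in which the encoder predicts from its own reconstruction rather than from the original feature map, keeps device and server states identical and prevents quantization error from accumulating across frames.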