Browsing by Author "Crupi, Felice"
Now showing 1 - 20 of 51
Item cloud-assisted, agent based framework for cyber-physical systems (2016-02-19) Vinci, Andrea; Crupi, Felice; Spezzano, Giandomenico

Item Defect-Centric analysis of the channel hot carrier degradation (2016-02-02) Pròcel-Moya, Luis-Miguel; Pantano, Pietro; Crupi, Felice
Over the last decade, channel hot carrier (CHC) degradation has been considered one of the most important degradation mechanisms of modern CMOS technology. CHC degradation occurs when a voltage higher than the saturation voltage is applied to the drain terminal while, at the same time, a voltage higher than the threshold voltage (VTH) is applied to the gate terminal. In this work we used the so-called defect-centric distribution (DCD) to explain and describe the CHC degradation mechanism. The DCD relies on two assumptions: the ΔVTH produced by a single charge follows an exponential distribution (mean value η), and the total number of defects follows a Poisson distribution (mean value Nt). The combination of these two assumptions yields the DCD. In recent years the DCD has been used to describe and explain bias temperature instability (BTI) and is able to predict the extreme tails of the ΔVTH distribution up to 4σ. The advantage of using the DCD is that its first and second moments are directly related to the physical parameters η and Nt. In this work we show that the DCD is also able to describe and explain the degradation of the ΔVTH distribution up to 3σ. The dependence of the defect-centric parameters η and Nt on the device geometry was studied: η is inversely proportional to the device area, as in BTI degradation. Moreover, the expected value of the ΔVTH distribution (⟨ΔVTH⟩) increases strongly as the channel length (L) decreases and increases weakly as the device width (W) decreases. For BTI degradation it is reported that there is no dependence between ⟨ΔVTH⟩ and L; therefore, the strong dependence found here is attributed to CHC degradation. We also studied the temperature (T) dependence of the defect-centric parameters and found that η does not depend on T, unlike in BTI experiments, whereas Nt increases with T, a fact explained by the activation of the electron–electron scattering mechanism; an activation energy of 56 meV was extracted for Nt. Finally, we used matching-pair devices to study time-zero variability and time-dependent variability, showing that neither the stress time nor the stress voltage applied to the drain terminal affects the variability.
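A minimal illustrative sketch of the defect-centric distribution described above (not code from the thesis): the number of defects per device is drawn from a Poisson distribution with mean Nt, and each defect contributes an exponentially distributed ΔVTH with mean η. The numerical values of eta and Nt below are hypothetical.

```python
import numpy as np

# Illustrative Monte Carlo sampler for the defect-centric distribution (DCD).
rng = np.random.default_rng(0)
eta = 1.5e-3      # mean per-defect Vth shift [V] (hypothetical value)
Nt = 8.0          # mean number of defects per device (hypothetical value)
n_devices = 100_000

defects = rng.poisson(Nt, size=n_devices)
dvth = np.array([rng.exponential(eta, size=k).sum() for k in defects])

# The first two moments follow the defect-centric relations <dVth> = Nt*eta and
# var(dVth) = 2*Nt*eta^2, which tie them to the physical parameters eta and Nt.
print(dvth.mean(), Nt * eta)
print(dvth.var(), 2 * Nt * eta**2)
```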
Item methodology for the development of autonomic and cognitive internet of things ecosystems (2018-06-08) Savaglio, Claudio; Crupi, Felice; Fortino, Giancarlo
Advancements in microelectromechanical systems, embedded technologies, and wireless communications have recently enabled the evolution of conventional everyday things into enhanced entities, commonly defined as Smart Objects (SOs). Their continuous and widespread diffusion, along with an increasing and pervasive connectivity, is enabling unforeseen interactions with conventional computing systems, places, animals and humans, thus fading the boundary between the physical and digital worlds. The Internet of Things (IoT) refers precisely to such a futuristic scenario, namely a loosely coupled, decentralized and dynamic ecosystem in which billions (even trillions) of self-steering SOs are globally interconnected and become active participants in business, logistics, information and social processes. Indeed, SOs are able to provide highly pervasive cyberphysical services to both humans and machines thanks to their communication, sensing, actuation, and embedded processing capabilities. Nowadays, the systemic revolution that can be driven by the complete realization of the IoT vision is just at its dawn. As a matter of fact, whereas new IoT devices and systems have already been developed, they often result in poorly interoperating "Intra-nets of Things", mainly due to the heterogeneity of IoT building blocks and the lack of standards. Thus, the development of massive-scale (the total number of "things" is forecast to reach 20.4 billion in 2020) and actually interoperable IoT systems is a challenging task, characterized by several requirements and by novel, even unsurveyed, issues. In this context, a multidisciplinary and systematic development approach is necessary, so as to involve different fields of expertise in coping with the cyberphysical nature of IoT ecosystems. Hence, full-fledged IoT methodologies are gaining traction, aiming at systematically supporting all development phases, addressing the mentioned issues, and reducing time-to-market, effort and probability of failure. In such a scenario, this thesis proposes an application domain-neutral, full-fledged agent-based development methodology able to support the main engineering phases of IoT ecosystems. The definition of such a systematic approach resulted in ACOSO-Meth (Agent-based COoperating Smart Objects Methodology), which is the major contribution of this thesis, along with other research efforts supporting (i.e., a multi-technology and multi-protocol smartphone-based IoT gateway) and extending (i.e., a full-fledged approach to the modeling of IoT services according to their opportunistic properties) the main proposal. Finally, to provide validation and performance evaluation of the proposed ACOSO-Meth approach, four use cases (related to different application contexts such as a smart university campus, a smart digital library, a smart city and a smart workshop) have been developed. These research prototypes showed the effectiveness and efficiency of the proposed approach and improved their respective state of the art.

Item Analisi della socialità multi-livello nelle reti opportunistiche per la propagazione dei messaggi (2019-06-08) Caputo, Antonio; Crupi, Felice; Marano, Salvatore; De Rango, Floriano

Item Analysis and development of physical and MAC layer protocols in mobile ad hoc networks involving directional antenna communications (2019-06-20) Inzillo, Vincenzo; Crupi, Felice; De Rango, Floriano
Recent studies and research in IT (Information Technology) have brought about an increasing development of pervasive communication environments such as MANETs and sensor networks, which have assumed great importance since the development of the IEEE 802.1X standards; their characteristic features, node mobility and power consumption, have led to the rise of several protocols implementing different designs for routing algorithms and QoS (Quality of Service) specifications.
Conventionally, these kinds of network environments are equipped at the physical layer with isotropic and omnidirectional antenna systems, which produce a radiation pattern with constant gain in all TX/RX directions and thus result in non-directive node behavior. In this context there are many drawbacks that heavily affect protocol efficiency and SNR (Signal to Noise Ratio), such as communication reliability, latency, scalability, and power and energy consumption. For example, using an isotropic antenna in nodes without a position-awareness mechanism leads to a notable waste of energy, because the same power is transmitted/received in all directions. To overcome this drawback, so-called Smart Antenna Systems have been developed in recent years; they usually consist of several directive radiating elements implementing adaptive algorithms for the estimation of the DOA (Direction of Arrival) and the SOI (Signal Of Interest). For this purpose, beamforming techniques are employed, which are widely used in radar communication systems and phased array systems. The resulting radiation pattern generates a beam that can be electronically controlled, with the main lobe pointed towards the direction of interest during transmission/reception. The beam is generated according to an adaptive algorithm (e.g., Least Mean Square) that models the weight vector given as input to the smart antenna system. Beamforming techniques bring significant advantages in medium access control; in fact, employing SDMA (Spatial Division Multiple Access) allows a large growth in protocol efficiency. MANET performance can be further enhanced if more efficient antenna systems, such as Massive MIMO (Multiple Input Multiple Output) systems, are employed; indeed, massive MIMO underlies the development of 5G mobile wireless network environments. However, despite their capability to improve network performance, such systems introduce several kinds of issues, especially in terms of energy consumption, that must be addressed. The main purpose of this thesis is to limit most of the mentioned issues related to directional communications in MANETs, in order to improve the current state of the art with respect to both protocols and network performance. From a protocol point of view, it is important to highlight that most of the overall contribution of the present work addresses energy efficiency, the deafness problem and, finally, mobility issues occurring at the physical and MAC (Medium Access Control) layers. The remainder of the thesis is organized as follows. Chapter 1 introduces the main concepts of network communications using directional and omnidirectional antennas in MANETs and their common related issues. Chapter 2 gives the basic and fundamental theoretical notions of Smart Antenna Systems (SAS) and massive MIMO, with particular emphasis on beamforming algorithms. Chapter 3 is divided into two parts: the first illustrates the basic features of the main instrument used for the experimental analysis, namely the Omnet++ network simulator; the second presents the most significant work produced to extend the default Omnet++ framework to enable simulation scenarios supporting SAS and massive MIMO systems. Chapter 4 provides a detailed discussion of the deafness problem in directional MANET communications and then illustrates the most significant proposals in this field, with a special focus on the designed Round-Robin based approaches.
Chapter 5 describes the main issues related to the mobility and energy consumption of nodes in directional MANETs, with particular attention to the handoff problem; it also illustrates the novel strategies proposed to mitigate energy consumption in very high gain beamforming communications employing SAS and massive MIMO systems. All of the above chapters are organized in a similar way; more specifically, each chapter consists of three main parts. Background: gives a brief theoretical explanation of the most important concepts mentioned in the chapter. State of the art: illustrates the most significant works related to the topics covered in the chapter. Personal contribution: highlights the main contributions achieved by the author, which improve the current state of the art on a particular topic.
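The abstract above mentions beamforming driven by an adaptive Least Mean Square (LMS) algorithm. As a purely illustrative sketch of the standard complex LMS weight update (not the protocols developed in the thesis), where the array snapshots, the reference signal and the step size mu are hypothetical:

```python
import numpy as np

def lms_beamformer(snapshots, desired, mu=0.01):
    """Standard complex LMS weight update for an adaptive antenna array.

    snapshots: (num_samples, num_elements) complex array outputs x[n]
    desired:   (num_samples,) reference signal d[n]
    """
    w = np.zeros(snapshots.shape[1], dtype=complex)
    for x, d in zip(snapshots, desired):
        y = np.vdot(w, x)              # array output  y = w^H x
        e = d - y                      # estimation error
        w = w + mu * np.conj(e) * x    # LMS weight update
    return w

# Toy usage with random snapshots; the reference tracks element 0.
rng = np.random.default_rng(1)
x = rng.standard_normal((200, 4)) + 1j * rng.standard_normal((200, 4))
print(lms_beamformer(x, x[:, 0]))
```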
Item Anomalies in cyber security: detection, prevention and simulation approaches (2018-07-03) Argento, Luciano; Crupi, Felice; Furfaro, Angelo; Angiulli, Fabrizio
With the massive adoption of the Internet, both our private and working lives have drastically changed. The Internet has introduced new ways to communicate and complete everyday tasks. Organisations of any kind have taken their activities online to achieve many advantages, e.g. commercial organisations can reach more customers with proper marketing. However, the Internet has also brought various drawbacks, and one of these concerns cyber security issues. Whenever an entity (e.g. a person or company) connects to the Internet it immediately becomes a potential target of cyber threats, i.e. malicious activities that take place in cyberspace. Examples of cyber threats are theft of intellectual property and denial of service attacks. Many efforts have been spent to make the Internet perhaps the most revolutionary communication tool ever created, but unfortunately little has been done to design it in a secure fashion. Since the massive adoption of the Internet we have witnessed a huge number of threats, perpetrated by many different actors such as criminal organisations, disgruntled workers and even people with little expertise, thanks to the existence of attack toolkits. On top of that, cyber threats are constantly going through a steady evolution process and, as a consequence, they are getting more and more sophisticated. Nowadays, the cyber security landscape is in a critical condition. It is of utmost importance to keep up with the evolution of cyber threats in order to improve the state of cyber security: we need to adapt existing security solutions to the ever-changing security landscape and devise new ones when needed. The research activities presented in this thesis find their place in this complex scenario. We investigated significant cyber security problems, related to data analysis and anomaly detection, in different areas of research: hybrid anomaly detection systems; intrusion detection systems; access control systems; and the Internet of Things. Anomaly detection approaches are very relevant in the field of cyber security. Fraud and intrusion detection are well-known research areas where such approaches are very important. A lot of techniques have been devised, which can be categorised into anomaly- and signature-based detection techniques. Researchers have also spent much effort on a third category of detection techniques, i.e. hybrid anomaly detection, which combines the two former approaches in order to obtain better detection performance. Towards this direction, we designed a generic framework, called HALF, whose goal is to accommodate multiple mining algorithms of a specific domain and provide a flexible and more effective detection capability. HALF can be easily employed in different application domains, such as intrusion detection and steganalysis, due to its generality and the support provided for the data analysis process. We analysed two case studies to show how HALF can be exploited in practice to implement a Network Intrusion Detection System and a steganalysis tool. The concept of anomaly is a core element of the research activity conducted in the context of intrusion detection, where an intrusion can be seen as an anomalous activity that might represent a threat to a network or system. Intrusion detection systems constitute a very important class of security tools which have become an invaluable defence wall against cyber threats. In this thesis we present two research results that stem from issues related to IDSs that resort to the n-gram technique. The starting point of our first contribution is the threat posed by content-based attacks, whose goal is to deliver malicious content to a service in order to exploit its vulnerabilities. This type of attack has been causing serious damage to both people and organisations over the years. Some of these attacks may exploit web application vulnerabilities to achieve goals such as data theft and privilege escalation, which may lead to enormous financial loss for the victim. IDSs that exploit the n-gram technique have proven to be very effective against this category of cyber threats. However, n-grams may not be sufficient to build reliable models that describe normal and/or malicious traffic. In addition, the presence of an adversarial attacker is not properly addressed by existing solutions. We devised a novel anomaly-based intrusion detection technique, called PCkAD, to detect content-based attacks threatening application-level protocols. PCkAD models legitimate traffic on the basis of the spatial distribution of the n-grams occurring in the relevant content of normal traffic and has been designed to be resistant to blending evasion techniques; indeed, we demonstrate that evading it is an intrinsically difficult problem. The experiments conducted to evaluate PCkAD show that it achieves state-of-the-art performance in real attack scenarios and that it performs well against blending attacks. The second contribution concerning intrusion detection investigates issues that may be brought about by the employment of the n-gram technique. Many approaches using n-grams have been proposed in the literature, typically exploiting high-order n-grams to achieve good performance. However, because the n-gram domain grows exponentially with the n-gram size, significant issues may arise, from the generation of huge models to overfitting. We present an approach aimed at reducing the size of n-gram-based models, which is able to build models that contain only a fraction of the original n-grams with little impact on the detection accuracy. The reported experiments, conducted on a real-world dataset, show promising results.
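As a purely illustrative sketch of the kind of positional n-gram profiling described above (not the actual PCkAD algorithm), the following Python snippet splits a payload into chunks and counts in which chunk each n-gram occurs; the chunking scheme and parameters are hypothetical:

```python
from collections import defaultdict

def ngram_positional_profile(payload: bytes, n: int = 3, num_chunks: int = 4):
    """Count, for each n-gram, how often it appears in each chunk of the payload."""
    chunk_len = max(1, len(payload) // num_chunks)
    profile = defaultdict(lambda: [0] * num_chunks)
    for i in range(len(payload) - n + 1):
        gram = payload[i:i + n]
        chunk = min(i // chunk_len, num_chunks - 1)
        profile[gram][chunk] += 1
    return profile

# Profiles built from normal traffic could then be compared against the profile
# of an incoming payload to flag anomalous content.
print(ngram_positional_profile(b"GET /index.html HTTP/1.1"))
```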
The research concerning access control systems focused on anomalies that represent attempts to exceed or misuse access rights in order to negatively affect the confidentiality, integrity or availability of a target information system. Access control systems are nowadays the first line of defence of modern computing systems. However, their intrinsically static nature hinders the autonomous refinement of access rules and the adaptation to emerging needs. Advanced attribute-based systems still rely mainly on manual administration approaches and are not effective at preventing insider threats that exploit granted access rights. We introduce a machine learning approach to refine attribute-based access control policies based on behavioural patterns of users' access to resources. The designed system tailors a learning algorithm upon decision tree solutions. We analysed a case study and conducted an experiment to show the effectiveness of the system. IoT is the last topic of interest in the present thesis. IoT is showing the potential to impact several domains, ranging from personal to enterprise environments. IoT applications are designed to improve most aspects of both business and citizens' lives; however, this emerging technology has become an attractive target for cybercriminals. A worrying security problem concerns the presence of many smart devices that have security holes. Researchers are investing their efforts in the evaluation of security properties. Following this direction, we show that it is possible to effectively assess cyber security scenarios involving IoT settings by combining novel virtual environments, agent-based simulation and real devices, thereby obtaining a means that helps prevent anomalous actions from taking advantage of security holes for malicious purposes. We demonstrate the effectiveness of the approach through a case study regarding a typical smart home setting.

Item Bio-inspired techniques applied to the coordination of a swarm of robots involved in multiple tasks (2017-11-13) Palmieri, Nunzia; Crupi, Felice; Marano, Salvatore; Yang, Xin-She
The research topic addressed in this thesis concerns the problem of coordinating robots through decentralized algorithms that use mechanisms based on Swarm Intelligence. These techniques aim to improve the ability of each robot, each of which has limited resources, to decide where to move or what to do on the basis of simple rules and local interactions. In recent years there has in fact been a growing interest in solving some robotics problems through algorithms that draw inspiration from natural phenomena and from animals that exhibit developed social behaviours and a remarkable capacity for environmental adaptation. In the field of robotics, a crucial aspect is the coordination of the robots so that they can carry out tasks cooperatively. The coordination must allow the agents to adapt to the dynamic conditions of the surrounding environment, giving the system robustness, flexibility and reliability. More specifically, the reference scenario is an area in which objects are scattered and in which a number of robots operate whose goal is to detect and manipulate those objects. Each robot does not know the position of the objects and has no knowledge of the surrounding environment or of the position of the other robots. The problem is divided into two sub-problems: the first concerns the exploration of the area and the other the manipulation of the objects. Essentially, each robot explores the environment independently, basing its moves on its own current position and on that of the others through an indirect communication mechanism (stigmergy).
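A minimal, purely illustrative sketch of ant-inspired, stigmergy-based exploration (not the coordination algorithms actually proposed in the thesis): a robot moves to the neighbouring grid cell with the lowest pheromone level and marks it, so that the environment itself records which cells the swarm has already visited. The grid, the marking rule and the parameters are hypothetical.

```python
import random

def step(position, pheromone, grid_size):
    """Move to the least-marked neighbouring cell and deposit pheromone there."""
    x, y = position
    neighbours = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= x + dx < grid_size and 0 <= y + dy < grid_size]
    lowest = min(pheromone[c] for c in neighbours)
    nxt = random.choice([c for c in neighbours if pheromone[c] == lowest])
    pheromone[nxt] += 1.0   # indirect communication: mark the environment
    return nxt

grid_size = 10
pheromone = {(i, j): 0.0 for i in range(grid_size) for j in range(grid_size)}
pos = (0, 0)
for _ in range(20):
    pos = step(pos, pheromone, grid_size)
print(pos)
```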
In the object manipulation phase, instead, a direct communication mechanism based on wireless communication is used. The area exploration algorithm draws inspiration from the behaviour of certain insects in nature, such as ants, which use the environment in which they live as a communication medium (stigmergy). Then, for the moment when a robot detects the presence of an object, two approaches have been proposed. In the first, information is spread among the robots through a "one-hop" communication mechanism, and nature-inspired metaheuristics are used as the decision and coordination mechanism. The second approach relies on "multi-hop" communication, and a coordination protocol, also biologically inspired, has been proposed. Both approaches rely on decentralized mechanisms in which there is no leader issuing hierarchical directives, and each robot makes its decisions autonomously on the basis of the events occurring in the environment. Globally, the result is a self-organized, flexible and highly adaptable system. To test the approaches, a simulator was built, on which numerous studies were carried out to evaluate the proposed algorithms and their efficiency, and to estimate how the main variables and parameters of the model can influence the final solution.

Item Circuit and architecture solutions for the low-voltage, low-power domain (2016-02-02) Albano, Domenico; Pantano, Pietro; Crupi, Felice

Item Constraint satisfaction: algorithms, complexity results, and applications (2016-02-19) Lupia, Francesco; Crupi, Felice; Scarcello, Francesco; Greco, Gianluigi
A fundamental problem in the field of Artificial Intelligence and related disciplines, in particular database theory, is the constraint satisfaction problem (CSP), which serves as a unifying framework to express a wide spectrum of computational problems. Examples include graph colorability, planning, and database queries. The goal is either to find one solution, to enumerate all solutions, or to count them. As a very general problem, it comes as no surprise that in most settings CSPs are hard to solve. Indeed, considerable effort has been invested by the scientific community to shed light on the computational issues of this problem, with the objective of identifying easy instances (also called islands of tractability) and exploiting the knowledge derived from their solution to help solve the harder ones. My thesis investigates the role that structural properties play in the computational aspects of CSPs, describes algorithms to exploit such properties, and provides a number of specific tools to efficiently solve problems arising in database theory, game theory, and process mining.
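As a purely illustrative sketch of how a problem such as graph coloring can be cast as a CSP and solved by naive backtracking (not the structural-decomposition algorithms studied in the thesis), consider the following Python snippet; the graph instance and the number of colors are hypothetical:

```python
def color_graph(edges, nodes, colors):
    """Backtracking search: assign each node a color so adjacent nodes differ."""
    assignment = {}

    def consistent(node, color):
        return all(assignment.get(other) != color
                   for a, b in edges
                   for other in ((b,) if a == node else (a,) if b == node else ()))

    def backtrack(i):
        if i == len(nodes):
            return True
        node = nodes[i]
        for color in colors:
            if consistent(node, color):
                assignment[node] = color
                if backtrack(i + 1):
                    return True
                del assignment[node]
        return False

    return assignment if backtrack(0) else None

# 3-coloring of a small cycle graph (hypothetical instance).
print(color_graph([("a", "b"), ("b", "c"), ("c", "a")],
                  ["a", "b", "c"], ["red", "green", "blue"]))
```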
Item Data mining techniques for large and complex data (2017-11-13) Narvaez Vilema, Miryan Estela; Crupi, Felice; Angiulli, Fabrizio
During these three years of research I dedicated myself to the study and design of data mining techniques for large quantities of data. Particular attention was devoted to training set condensing techniques for the nearest-neighbor classification rule and to techniques for node anomaly detection in networks. The first part of this thesis focused on the design of strategies to reduce the size of the subset extracted by condensing techniques and on their experimentation. Training set condensing techniques aim to determine a subset of the original training set with the property that it allows all training set examples to be correctly classified; the subset extracted by these techniques is also known as a consistent subset. The result of the research was the development of various subset selection strategies, designed to determine during the training phase the most promising subset based on different methods of estimating test accuracy. Among them, the PACOPT strategy is based on the Pessimistic Error Estimate (PEE) to estimate generalization as a trade-off between training set accuracy and model complexity. The experimental phase took as reference the FCNN condensation technique: among the condensation methods based on the nearest neighbor decision rule (NN rule), FCNN (Fast Condensed NN) is one of the most advantageous, particularly in terms of time performance. We showed that the designed selection strategies preserve the accuracy of a consistent subset, and we also demonstrated that they significantly reduce the size of the model. Comparison with notable training-set reduction techniques for the NN rule witnesses the state-of-the-art performance of the strategies introduced here. The second part of the thesis is directed towards the design of analysis tools for network-structured data. Anomaly detection is an area that has received much attention in recent years; it has a wide variety of applications, including fraud detection and network intrusion detection. Techniques focused on anomaly detection in static graphs assume that the networks do not change and are capable of representing only a single snapshot of the data. As real-world networks are constantly changing, there has been a shift in focus to dynamic graphs, which evolve over time. We present a technique for node anomaly detection in networks whose arcs are annotated with their time of creation. The technique aims at singling out anomalies by taking into account, simultaneously, information concerning both the structure of the network and the order in which connections have been established; the latter information is obtained from the timestamps associated with the arcs. A set of temporal structures is induced by checking certain conditions on the order of arc appearance, denoting different kinds of user behaviour. The distribution of these structures is computed for each node and used to detect anomalies. We point out that the approach investigated here is substantially different from techniques dealing with dynamic networks. Indeed, our aim is not to determine the points in time at which a certain portion of the network (typically a community or a subgraph) exhibited a significant change, as is usually done by dynamic-graph anomaly detection techniques; rather, our primary aim is to analyze each single node by taking simultaneously into account its temporal footprint.
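As a purely illustrative sketch of the idea behind training-set condensation for the NN rule (a basic Hart-style condensing loop, not the FCNN algorithm or the selection strategies evaluated in the thesis), the following Python snippet keeps only the examples needed to classify the whole training set correctly; the toy data are hypothetical:

```python
import numpy as np

def condense(X, y):
    """Grow a consistent subset: add a point whenever the current subset misclassifies it."""
    subset = [0]                       # start from an arbitrary example
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            # 1-NN prediction using only the current subset
            nearest = min(subset, key=lambda j: np.linalg.norm(X[i] - X[j]))
            if y[nearest] != y[i]:
                subset.append(i)
                changed = True
    return subset

# Toy 1-D dataset with two classes (hypothetical).
X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
y = np.array([0, 0, 0, 1, 1, 1])
print(condense(X, y))   # indices of the retained (consistent) subset
```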
Item Design of back contact solar cells featuring metallization schemes with multiple emitter contact lines based on TCAD numerical simulations (2017-11-13) Guevara Granizo, Marco Vinicio; Crupi, Felice
The most demanding goal within the PV community is to design and manufacture devices featuring high efficiency at low cost with the best possible reliability. The key to achieving this target is to optimize and improve the current fabrication processes as well as the layouts of the devices. TCAD modeling of PV devices turns out to be a powerful tool that lowers laboratory manufacturing costs and accelerates optimization processes by providing guidelines on how to proceed. TCAD modeling examines designs before their implementation, accurately predicting their real behavior. When simulations are correctly calibrated, changing the simulation parameters allows finding ways to improve the design parameters or simply to better understand the internal operation of these devices. In this regard, this Ph.D. thesis deals with the electro-optical numerical simulation of interdigitated back-contact (IBC) c-Si solar cells, nowadays the architecture towards which industry is pushing because of its numerous advantages. Among the benefits of this design are the improved efficiency, due to the absence of front optical shading, and the relative simplicity of its mass production. The aim of this thesis is to provide guidelines on the optimal design parameters of IBC solar cells, based on state-of-the-art advanced numerical simulations. Two main topics are treated: (i) the development of a simplified method to compute the optical profiles ten times faster than the traditional one, and (ii) an extensive study of the impact of adding multiple striped metal contacts throughout the emitter region, which improves the efficiency by reducing the internal series resistance. A large number of ad-hoc calibrated simulations was performed, sweeping wide ranges of modeling parameters (i.e., geometric sizes, doping profiles, carrier lifetimes, and recombination rates) to investigate their influence on device operation and to identify the most critical ones. This insight leads to a better understanding of this kind of solar cell and helps to identify ways to refine the structures and enhance the layouts of real devices, for either laboratory or industry.

Item Design of interdigitated back-contact solar cells by means of TCAD numerical simulations (2017-02-13) Maccaronio, Vincenzo; Pantano, Pietro; Cocorullo, Giuseppe; Crupi, Felice
The promise of solar energy as a primary energy source is increasingly concrete, but the crucial issue remains the cost per Watt, which must approach, or even drop below, that of the existing energy distribution grids. Optimization work in terms of design and fabrication parameters is therefore fundamental to reaching this goal. Crystalline silicon is the most widespread material in the photovoltaic industry, due to several factors, including its excellent cost/performance ratio and the wide availability of equipment for its processing, a consequence of its decades-long use in the microelectronics industry. Among the various existing cell types, an architecture that places both metal contacts on the rear side, therefore called interdigitated back-contact (iBC), was chosen. This particular design offers numerous advantages over conventional cells in terms of maximum efficiency, production cost and panel aesthetics. In fact, at present the highest efficiencies for single-junction cells, both at the laboratory level and for commercial modules, have been obtained using this structure, on which an in-depth research activity can therefore prove to be of considerable interest.
For the analysis process a numerical approach was chosen, using the Synopsys Sentaurus TCAD device simulator. The use of simulations offers numerous advantages over optimization by means of repeated fabrication steps. First of all, a cost advantage, since no equipment, materials or clean rooms are needed. Moreover, a numerical analysis makes it possible to identify and highlight specific points or causes of losses or design problems. The main difficulty of this approach lies in the need to guarantee the reliability of the simulations, and this was achieved by applying the state of the art of all the specific physical models involved in the operation of this kind of cell. The research topic addressed was therefore the design of silicon solar cells with interdigitated rear contacts by means of numerical simulations. The optimization work was carried out by investigating a very large space of fabrication parameters and obtaining information on the performance trends as they vary. The first chapter illustrates the physics and the operating principles of a solar cell, starting from the absorption of light, moving on to its conversion into electric charges, and ending with their collection to generate power; the recombination mechanisms and the other causes of losses are presented and examined. The second chapter details the architecture of a solar cell, highlighting the different regions and presenting the back-contact structure. The third chapter is dedicated to explaining the simulation strategies applied in this work, with the definition of the physical models applied and calibrated to ensure the required accuracy. Chapters four and five present the results of the simulations performed, obtained by varying the geometric characteristics of the different regions of the cell and the doping profiles. Behavioural trends were obtained for the individual parameters which, in the case of the dopings, show that for each region the efficiency follows a bell-shaped curve, with a relative doping optimum at an intermediate point. This behaviour is due, for low doping values, to the effect of contact recombination for the BSF and the emitter and of surface recombination for the FSF; for high doping values, the efficiency degradation depends on Auger recombination for the BSF and the emitter and on surface recombination for the FSF. As far as the geometric parameters are concerned, the analyses show that the gap between emitter and BSF must be as small as possible, since as its size increases the resistive and recombination losses increase. It was determined that the optimal value of the emitter coverage is not absolute, but depends on the bulk resistivity and on the dopings of the other regions, ranging between 80% and 90%. As for the optimal pitch, i.e. the distance between the contacts, it was determined that higher efficiencies correspond to smaller values, mainly because the parasitic resistances increase as the distance increases.
Finally, it was shown that the addition of a second contact on the emitter, equally spaced from the centre of the region, improves the total efficiency because it reduces the resistive losses, especially in the case of cells with long emitters.

Item Design of point contact solar cell by means of 3D numerical simulations (Università della Calabria - Dottorato di Ricerca in Information and Communication Engineering For Pervasive Intelligent Environments, 2017-11-13) Guerra González, Noemi Lisette; Crupi, Felice
Nikola Tesla said that "the sun maintains all human life and supplies all human energy". As a matter of fact, the sun furnishes all forms of life with energy: for example, through the photosynthesis process, plants absorb solar radiation and convert it into stored energy for growth and development, thus supporting life on earth. For this reason, the sun is considered one of the most important and plentiful sources of renewable energy. This star is about 4.6 billion years old, with another 5 billion years of hydrogen fuel to burn in its lifetime. This characteristic gives all living creatures a sustainable and clean energy source that will not run out anytime soon. In particular, solar power is the primary source of electrical and thermal energy produced by directly exploiting the highest levels of the energy irradiated from the sun to our planet. Therefore, solar energy offers many benefits: it releases no greenhouse gases (GHGs) or other harmful gases into the atmosphere, it is economically feasible in urban and rural areas, and it is evenly distributed across the planet. Moreover, as mentioned above, solar power is also essentially infinite, which is why it is close to becoming the largest source of electricity in the world by 2050. On the other hand, most of the energy forms available on earth arise directly from solar energy, including wind, hydro, biomass and fossil fuels, with some exceptions such as nuclear and geothermal energy. Accordingly, solar photovoltaics (PV) is a technology capable of converting the inexhaustible solar energy into electricity by exploiting the electronic properties of semiconductor materials, representing one of the most promising ways of generating electricity and an attainable, smart option to replace conventional fossil fuels. PV energy is also a renewable, versatile technology that can be used for almost anything that requires electricity, from small and remote applications to large, central power stations. Solar cell technology is undergoing a transition to a new generation of efficient, low-cost products based on certain semiconductor and photoactive materials. Furthermore, it has definite environmental advantages over competing electricity generation technologies, and the PV industry follows a pro-active life-cycle approach to prevent future environmental damage and to sustain these advantages. An issue with potential environmental implications is the decommissioning of solar cell modules at the end of their useful life, which is expected to be about 30 years. A viable answer is recycling or re-using them in some way when they are no longer useful, by implementing a collection/recycling infrastructure based on current and emerging technologies. Some feasibility studies show that the technology for end-of-life management and recycling of PV modules already exists and that the costs associated with recycling are not excessive.
In particular, photovoltaics is a friendly and excellent alternative to meet the growing global energy demand by producing clean and sustainable electricity that can replace conventional fossil fuels, thus reducing negative greenhouse effects (see section 1.1). Reasoning from this fact, solar cell specialists have been contributing to the development of advanced PV systems, from a costly space technology to affordable terrestrial energy applications. Indeed, since the early 1980s, PV research activities have obtained significant improvements in the performance of diverse photovoltaic applications. A new generation of low-cost products based on thin films of photoactive materials (e.g., amorphous silicon, copper indium diselenide (CIS), cadmium telluride (CdTe), and film crystalline silicon) deposited on inexpensive substrates increases the prospects of rapid commercialization. In particular, the photovoltaic industry has focused on the development of feasible and high-efficiency solar cell devices using accessible semiconductor materials that reduce production costs. Nonetheless, photovoltaic applications must improve their performance and market competitiveness in order to increase their globally installed capacity. In this context, the design of innovative solar cell structures, along with the development of advanced manufacturing processes, is a key element for the optimization of a PV system. Nowadays, TCAD modeling is a powerful tool for the analysis, design, and manufacturing of photovoltaic devices. In fact, the use of a properly calibrated TCAD model allows investigating the operation of the studied solar cells in a reliable and detailed way, as well as identifying appropriate optimization strategies, while reducing costs, test time and production effort. Thereby, this Ph.D. thesis focuses on a research activity aimed at the analysis and optimization of solar cells with an interdigitated back contact (IBC) on a crystalline silicon (c-Si) substrate, also known as back contact-back junction (BC-BJ) cells. This type of solar cell consists of a design in which both metal contacts are located on the bottom of the silicon wafer, simplifying the cell interconnection at module level, characteristics that guarantee high conversion efficiency due to the absence of front-contact shadowing losses. In particular, the main purpose of this thesis is to investigate the dominant physical mechanisms that limit the conversion efficiency of these devices by using electro-optical numerical simulations. Three-dimensional (3D) TCAD-based simulations were executed to analyze the performance of an IBC solar cell featuring point contacts (PC) as a function of the metallization fraction. This scheme was also compared with a similar IBC structure featuring linear contacts (LC) on the rear side of the device. In addition, the impact of introducing a selective emitter (SE) scheme in the PC cell was evaluated. The analyses were carried out by varying geometric and/or process parameters (for example, the size and shape of the metal contacts, doping profiles, carrier lifetime, and recombination rates).
This approach provides a realistic and in-depth view of the behavior of the studied IBC solar cells and also furnishes useful information to optimize the architecture of the device in order to enhance the conversion efficiency and minimize production costs.

Item Design of high-efficiency crystalline silicon solar cells based on numerical simulation (2017-02-13) Procel Moya, Paul Alejandro; Pantano, Pietro; Cocorullo, Giuseppe; Crupi, Felice
The use of simulation tools has become a key approach in the design process of high-efficiency solar cells. In this thesis, structures and technologies related to advanced crystalline silicon solar cells are discussed and analyzed by means of numerical simulations. In particular, the critical parameters are highlighted, providing guidelines for obtaining the maximum efficiency given the technological constraints. Chapter 1 presents the evolution of c-Si cells, carried out with the objective of approaching the actual efficiency limits as closely as possible. Chapter 2 describes the general state of the art of crystalline silicon cells, focusing on their implementation in numerical simulations. Then, Chapter 3 presents a theoretical study of the impact of the design parameters on the main figures of merit of IBC c-Si solar cells, based on electro-optical simulations. The study was conducted by analyzing the main parameters and identifying the dominant mechanisms that improve or degrade the conversion efficiency. In particular, it is shown that the optimal doping concentrations and rear-side geometries result from trade-offs between intrinsic and extrinsic recombination mechanisms, in the case of the dopings, and between transport and recombination mechanisms, in the case of the rear geometries. Subsequently, the approach presented in Chapter 2 is extended in Chapter 4, which illustrates an innovative simulation model for IBC cells. The electro-optical simulation was validated and employed to study the front region of the back-contact cell. The new simulation methodology models in detail the optical behavior and the passivation mechanisms of the front texturing. The results show that a front interface textured with irregular pyramids and an optimal FSF are necessary to minimize both optical and recombination losses. Likewise, it was found that the recombination losses are influenced more by the doping profile than by the surface roughness. Regarding the optimization of the rear region, an improvement of 1% in absolute efficiency was obtained and, as a consequence, by improving both the quality of the emitter and that of the crystalline silicon base, a solar cell with an efficiency of 22.84% was presented. In Chapter 5, the simulation model is used to analyze critical design parameters for the application of passivated contacts in a conventional solar cell. The simulation results show that the main parameters limiting the transport mechanism are the barrier energy, the electron and hole tunneling masses and the oxide thickness. Moreover, it was found that the behavior of the built-in potential is correlated with the band alignment.
This effect provides an understanding of how crystalline silicon with an in-diffused doping supports transport by tunneling through the oxide layer. On the basis of the analyses carried out, indications are provided for the design of passivated contacts. In conclusion, this thesis provides guidelines for the design of IBC solar cells and of conventional solar cells with passivated contacts, with the aim of fostering fabrication processes for high-efficiency crystalline silicon solar cells.

Item Discovering the world city: from texts' analysis to 3D scenes visualization (2019) Bova, Valentina; Guarasci, Roberto; Crupi, Felice

Item Distributed Model Predictive Control Strategies for Constrained Multi-Agent Systems Moving in Uncertain Environments (Università della Calabria, 2021-09-17) Babak, Rahmani; Franzè, Giuseppe; Crupi, Felice

Item Distribution, Reuse and Interoperability of simulation models in heterogeneous distributed computing environments (2017-07-26) Falcone, Alberto; Garro, Alfredo; Crupi, Felice
Modeling and Simulation (M&S) is gaining a central role in several industrial domains, such as automotive, e-science and aerospace, due to the increasing complexity of system requirements and thus of the related engineering problems. Specifically, M&S methods, tools, and techniques can effectively support the analysis and design of modern systems by enabling the evaluation and comparison of different design choices against requirements through virtual testing; this opportunity becomes crucial when complete and actual tests are too expensive to be performed in terms of cost, time and other resources. Moreover, as systems result from the integration of components which are often designed and manufactured by different organizations belonging to different engineering domains (including mechanical, electrical, control, and software), great benefits can derive from the possibility of performing simulations which involve components independently developed and running on different and possibly geographically distributed machines. Indeed, distributed simulation promotes an effective cooperative, integrated and concurrent approach to the analysis and design of complex systems. Although M&S offers many advantages related to the possibility of doing controlled experiments on an artificial representation of a system, its practical use requires facing important issues such as: (i) difficulties in reusing simulation models already made; (ii) lack of rules and procedures by which to make models created with different simulation environments interoperable; and (iii) lack of mechanisms for executing simulation models in distributed and heterogeneous environments. Indeed, there are several highly specialized simulation environments, both commercial and non-commercial, that allow the design and implementation of simulation models in specific domains. However, a single simulation environment is not able to manage all the aspects necessary to model a system when it is composed of several components.
Typically, the modeling and simulation of such systems, whose behavior cannot be straightforwardly defined, derived and easily analyzed starting from the behavior of their components, requires identifying and facing some important research issues.

Item Dual mode logic-based design of variable-precision arithmetic circuits (2019-06-20) Romeo Riera, Paul Patricio; Crupi, Felice; Lanuzza, Marco
The ever-growing technological progress has an unquestionable impact on our society and, with the recent emergence of innovative technological paradigms such as the Internet of Things (IoT), Artificial Intelligence (AI), Virtual Reality (VR), 5G, Edge Computing, etc., it is expected to take a more and more dominant role in the coming decades. Obviously, the full development of all these new technologies requires the design of specialized hardware to faithfully and efficiently implement specific applications and services. In this sense, the demand for electronic circuits and systems with small area, flexible processing capability, high performance, and low energy consumption has recently become one of the major concerns in different research areas, such as computing, communications, and automation. In this context, this thesis work, entitled "Dual Mode Logic-Based Design of Variable-Precision Arithmetic Circuits", aims to contribute to the research on new design solutions for energy-efficient computing platforms, while also keeping high performance. In this regard, several strategies can be explored at different design abstraction levels, from system level down to device level. Among these, the design of variable-precision arithmetic circuits is a well-known approach for achieving more energy-efficient computing platforms when dealing with lossy multimedia applications (e.g., audio/video/image processing), where a reduction of the operation precision can typically be tolerated within an acceptable accuracy loss. At the same time, other solutions can be implemented at both circuit and logic level. In this regard, a new logic family, namely Dual Mode Logic (DML), has recently emerged as an alternative design methodology to the existing digital design techniques. It was originally proposed as a combination of CMOS static and dynamic logic to allow on-the-fly controllable switching at the gate level between static and dynamic operation modes according to system requirements, input-driven control, and/or designer considerations. Such modularity typically offers greater performance/energy trade-off flexibility in the design and optimization of digital circuits, especially for applications with a flexible workload, such as multi-precision arithmetic circuits. In this thesis work, the benefits of the DML design approach with respect to the standard CMOS style are first highlighted on a flexible circuit benchmark, consisting of 10 levels of 11-stage NAND/NOR chains. In this case, the DML implementation takes advantage of its capability to operate in a combined (mixed) mode, i.e. working at the same time partly statically and partly dynamically, thus fully exploiting the benefits of the two DML operation modes for better energy-performance trade-offs.
Then, the flexibility inherently offered by DML is exploited to design a double-precision (8×8-bit or 16×16-bit) carry-save adder (CSA)-based array multiplier, with the aim of demonstrating the potential of combining the two aforementioned design solutions (i.e., multi-precision computing and the DML methodology) in the design and optimization of arithmetic circuits. As a matter of fact, the DML dual operation ability is potentially very attractive for efficiently trading performance and energy consumption between operations at different precisions in on-demand multi-precision digital circuits. This occurs in the proposed DML multiplier working in a mixed operation mode, i.e., by employing the DML static and dynamic modes for lower- and higher-precision operations, respectively. On the one hand, the use of the dynamic mode for higher-precision operations ensures higher performance compared to the standard static CMOS counterpart (16% gain on average) at the cost of higher energy consumption. On the other hand, such energy penalty is counterbalanced at lower-precision operations, for which the static mode is enabled in the DML circuit. Overall, the adoption of the mixed operation mode in the proposed DML multiplier proves beneficial in achieving a better performance/energy trade-off with respect to the standard static CMOS implementation and to the cases in which the DML static or dynamic mode is used for the operations at both precisions. When compared to its CMOS counterpart, the proposed DML design operating in the mixed mode exhibits an average improvement of 15% in terms of energy-delay product (EDP) under wide-range supply voltage scaling. Such benefit is maintained over process-voltage-temperature (PVT) variations.
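As a purely behavioural illustration of on-demand multi-precision multiplication (not a model of the DML circuit itself), the following Python sketch selects the operand width at run time; the 8/16-bit widths mirror the double-precision multiplier mentioned above, while everything else is hypothetical:

```python
def multiply(a: int, b: int, precision: int = 16) -> int:
    """Behavioural model of an on-demand variable-precision multiplier.

    precision selects the operand width (8 or 16 bits); in hardware a narrower
    operation would exercise fewer partial-product rows and save energy.
    """
    assert precision in (8, 16)
    mask = (1 << precision) - 1
    return (a & mask) * (b & mask)

print(multiply(0xBEEF, 0x1234, precision=16))   # full-precision product
print(multiply(0xBEEF, 0x1234, precision=8))    # reduced-precision (8-bit) product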
Item Dynamic argumentation in artificial intelligence (Università della Calabria, 2020-04-20) Alfano, Gianvincenzo; Crupi, Felice; Greco, Sergio; Parisi, Francesco
Argumentation is a prominent topic within the vast field of Artificial Intelligence. An argumentation system, by adopting a particular framework, can manage discussions among software agents and make autonomous decisions on the topics being argued about. Establishing the way in which decisions are made corresponds to establishing an argumentation semantics. Such semantics have a high computational cost and therefore, following the addition of new arguments, the problem arises of having to recompute the decisions (called extensions) over the entire updated framework. Although the computational limits and the algorithms for evaluating argumentation frameworks have been widely studied in the literature, this research is based on static frameworks, i.e. argumentation frameworks that undergo no updates, even though in practice argumentation systems model a highly dynamic process such as argumentation. The aim of this thesis is to produce efficient incremental algorithms that solve the main problems both of abstract argumentation (whose arguments represent abstract entities) and of the structured argumentation framework Defeasible Logic Programming (DeLP), whose arguments have an explicit structure since they are derived from a knowledge base (a DeLP program) containing facts, strict rules and defeasible rules. In the face of changes to the underlying graph (in the case of abstract argumentation) or to the DeLP program (in the case of structured argumentation), previously computed extensions are partially reused in order to avoid recomputing them from scratch. The thesis provides several contributions, both theoretical and practical. In particular, after analyzing the preliminary concepts underlying the main abstract argumentation frameworks, Chapter 3 proposes an approach to the problem of enumerating the preferred and semi-stable extensions of an abstract argumentation framework. Chapter 4 addresses the problem of incrementally recomputing a complete, preferred, grounded or stable extension for abstract frameworks. Basically, given an initial framework, one of its extensions and an update, the set of arguments influenced by the change is determined; these constitute a subset of the initial arguments that is used to build a reduced framework on which an extension is computed. By combining part of the initial extension with the one computed on the reduced framework, an extension of the updated framework is obtained. This approach is extended in Chapter 5 to bipolar argumentation frameworks and to frameworks with second-order attacks, exploiting a translation into classical abstract frameworks. The incremental technique is used in Chapter 6 to deal with the incremental computation of the sceptical acceptance of an argument under the preferred semantics (that is, establishing whether an argument is contained in all preferred extensions), exploiting the relationship between the preferred and ideal semantics. The idea and the motivations underlying the incremental technique proposed in Chapter 4 are exploited in Chapter 7 to address the problem of incrementally recomputing the status of the literals of a DeLP program following the addition or removal of rules. Indeed, after showing that the problem is NP-hard, an incremental algorithm is presented based on a hypergraph that encodes the dependency relations among literals induced by the rules of the DeLP program, in order to identify the portion of the program that is influenced by the change and needs recomputation. All the proposed algorithms have been analyzed experimentally, showing significant improvements over the corresponding computation from scratch.
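A minimal illustrative sketch of the incremental idea described above (a simplified stand-in for the "influenced set", not the algorithms defined in the thesis): after an update, only the arguments reachable from the updated argument via attack edges are re-evaluated, while the rest of a previously computed extension is reused. The framework, the extension and the update below are hypothetical.

```python
from collections import deque

def influenced_arguments(attacks, updated_argument):
    """Arguments reachable from the updated argument via attack edges."""
    succ = {}
    for a, b in attacks:
        succ.setdefault(a, set()).add(b)
    influenced, queue = {updated_argument}, deque([updated_argument])
    while queue:
        for nxt in succ.get(queue.popleft(), ()):
            if nxt not in influenced:
                influenced.add(nxt)
                queue.append(nxt)
    return influenced

attacks = [("a", "b"), ("b", "c"), ("d", "e")]   # hypothetical attack relation
extension = {"a", "c", "d"}                      # previously computed extension (assumed)
influenced = influenced_arguments(attacks, "b")  # the update touches argument b
print(influenced)                                # {'b', 'c'} -> recomputed on the reduced framework
print(extension - influenced)                    # {'a', 'd'} -> reused as-is
```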
Item Efficient incremental algorithms for handling graph data (2017-11-13) Quintana Lopez, Ximena Alexandra; Crupi, Felice; Greco, Sergio