Dipartimento di Ingegneria dell'Ambiente - Doctoral Theses
Permanent URI for this collection: http://localhost:4000/handle/10955/101
This collection gathers the doctoral theses of the Dipartimento di Ingegneria per l'Ambiente e il Territorio e Ingegneria Chimica of the Università della Calabria.
Item Accuracy aspects in flood propagation studies due to earthfill dam failures (2015-10-30) Razdar, Babak; Costabile, Pierfranco; Costanzo, Carmelina; Macchione, Francesco
Flooding due to dam failure is one of the catastrophic disasters that may cause significant damage in the inundated area downstream of the dam. In particular, there is a need for trustworthy numerical techniques capable of accurate computations over wide areas, aimed at obtaining reliable flood maps and, consequently, at implementing defensive measures. In general, accurate simulation of flood phenomena involves several key aspects, ranging from the choice of the mathematical model and numerical schemes used for the flow propagation to the characterization of the topography, the roughness and all the structures that might interact with the flow patterns. Within this general framework, this thesis discusses two aspects related to accuracy issues in dam breach studies. In the first part, a suitable analytical relation for the description of the reservoir is discussed; in the second part, the influence exerted by the methods used for computing the dam breach hydrograph on the simulated maximum water levels throughout the valley downstream of a dam is investigated. As regards the first aspect, the influence of reservoir morphology on the peak discharge and on the shape of the outflow hydrograph has been investigated in the literature. The calculation of the discharge released through the breach requires knowledge of the water level in the reservoir. It should be noted that, because of its natural topography, the reservoir morphology cannot be expressed exactly by an analytical formula in computational analyses.
For this reason, the information about reservoir morphology is usually published as detailed tables or plots in which each value of elevation, from bottom to top, has corresponding values of lake surface area and reservoir volume. However, when data are scarce, an analytical expression can be obtained by interpolating the values of the table. One of the most common interpolation techniques uses polynomial functions, but these require the estimation of several parameters. Using a power function in numerical computations of breach phenomena is advantageous because this function is a monomial and only one parameter needs to be estimated. This thesis shows that this approach is accurate and well suited to representing the morphology of reservoirs, at least for dam breach studies. To this end, 97 case studies have been selected from three different geographical regions of the world. The results show that the power function yields an accurate fit of the reservoir rating curve using a very limited number of surveyed elevations and volumes or areas. Furthermore, it has been shown that two points are enough for a good fit of the curve, or even only one if volume and surface area are both available for an elevation close to normal or maximum pool. Results obtained for dam breach calculations using this equation have the same quality as those achieved using the elevation-volume table. Moreover, the exponent of the power equation can be expressed by a formula with a precise morphological meaning: it represents the ratio between the volume the reservoir would have if it were a cylinder with base area and height equal to the respective maximum values of the actual reservoir, and the real volume of the reservoir.
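The monomial rating curve described above, V(h) = a·h^b, can be fitted by least squares in log space; and, as the abstract notes, if both the maximum volume V_max and the maximum surface area A_max are known at elevation h_max, the exponent follows directly from b = A_max·h_max/V_max (the cylinder-to-actual volume ratio). A minimal sketch, with purely illustrative survey data that are not taken from the thesis:

```python
import numpy as np

# Hypothetical surveyed pairs (elevation above reservoir bottom [m],
# stored volume [1e6 m^3]); illustrative values only.
h = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
V = np.array([0.4, 2.1, 12.5, 36.0, 78.0])

# Fit the monomial rating curve V = a * h**b by linear regression in log space.
b, log_a = np.polyfit(np.log(h), np.log(V), 1)
a = np.exp(log_a)

# One-point alternative noted in the abstract: if volume V_max and surface
# area A_max are both known near maximum pool (elevation h_max), then
#   b = A_max * h_max / V_max
# i.e. the ratio between the enclosing cylinder volume and the real volume.
```

Only `b` has to be estimated once `h_max` and `V_max` anchor the curve, which is why very few surveyed points suffice.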
Regarding the second aspect, beyond the complexity of the mathematical models used to predict the dam breach hydrograph, it should be noted that model validation has usually relied on historically observed peak discharge values and typical breach features (top width, side slope and so on). In other words, attention has traditionally focused on what is observed in the dam body, while the effects of the flood wave on the downstream water levels have usually been neglected. This issue is significant because the information required for civil protection and flood risk activities concerns the consequences of the flood propagation on the downstream areas, such as maximum water levels, maximum extent of flood-prone areas, flow velocity, front arrival times, etc. Water surface data linked to the reservoir filling/emptying process, which would provide important information for estimating the discharge through the breach, are almost never available. Moreover, it is quite unusual to have records of flood marks or other effects induced on the river bed, or on man-made structures, downstream. For this reason, finding a well-documented case study is one of the most important parts of any simulation study, especially for model validation. One of the few cases in this context is the Big Bay dam, located in Lamar County, Mississippi (USA), which experienced a failure on 12 March 2004. Analysing simplified models for dam breach simulation is the main purpose of this second activity of the thesis.
Simplified models have been used in this study in order to identify a method that, on the basis of the results obtained in terms of simulated maximum water levels downstream, might effectively represent a preferential approach for implementation not only in the most common propagation software but also for integration in flood information systems and decision support systems. For the reasons explained above, attention here focuses on the parametric models, widely used in technical studies, and on the Macchione (2008) model, whose predictive ability and ease of use have already been mentioned. To this end, both 1-D and 2-D flood propagation models have been used in this study. The results show that the Macchione (2008) model, without any ad hoc calibration, provided the best results in reproducing that event. Therefore it may be proposed as a valid alternative to parametric models, which need the estimation of parameters that can add further uncertainties to studies like these.
Item Air pollution across the Mediterranean basin: modelling, measurements and policy implications (2010-12-02) Bencardino, Mariantonia; Iorio, G.; Pirrone, Nicola; Sprovieri, Francesca
Item Analisi dei flussi energetici per la stima dell'evapotraspirazione attraverso tecniche di telerilevamento satellitare (2007-10-30) Calcagno, Giovanni; Veltri, Paolo; Mendicino, Giuseppe
Item Analisi su scala particellare dell'interazione fluido-solido in letti fluidizzati polidispersi e simulazioni in ambiente parallelo (2014-05-12) Cello, Fernando; Di Maio, Francesco Paolo; Di Renzo, Alberto
Item Analysis of membrane reactor integration in hydrogen production process (2014-11-11) Mirabelli, Ilaria; Drioli, Enrico; Barbieri, Giuseppe; Molinari, Raffaele
In the H2 production field, membrane reactor (MR) technology is considered promising and interesting.
In this thesis work, the integration of an MR carrying out the water gas shift (WGS) reaction into a small-scale hydrogen generator has been studied. In particular, the effect of MR integration from a systems perspective, i.e. specifically assessing the impact of the MR on the whole process, has been investigated. A preliminary design of a pilot-scale MR to produce 5 Nm3/h of H2 by reformate stream upgrading has been performed. A CO conversion of 95% and a hydrogen recovery yield of 90% have been fixed as the minimum performance targets of the WGS-MR. Depending on the system used to promote the driving force for permeation, three scenarios have been proposed: base, vacuum and sweep. On the basis of a preliminary scenario screening, the required membrane area (ca. 0.179 m2) for the vacuum and sweep scenarios has been estimated by means of MR modelling and simulation. The results obtained at the pilot scale have been used for the scale-up of the WGS-MR integrated in the 100 Nm3/h hydrogen production unit. The plant for the integrated process (reformer and WGS-MR) has been simulated using the commercial tool Aspen Plus®. The MR integration, in fact, implies a re-design of the process downstream of the WGS reactor. Since more than 90% of the produced H2 is directly recovered in the permeate stream, the PSA unit can be removed, leading to a more compact system. For post-processing of the retentate stream, the recovery of CO2 by means of membrane gas separation technology has been proposed. The results for a two-stage membrane separation unit confirmed the technological feasibility of CO2 capture, achieving the CO2 purity target.
Pursuing the logic of process intensification, the comparison with the reference technology (reformer, high temperature shift, PSA) showed that the WGS-MR integrated system results in a more "intensified" process, since a higher H2 productivity, a smaller plant and better exploitation of raw materials are obtained. In addition, since the MR delivers a high-pressure CO2-rich stream, it provides an opportunity for small-scale CO2 capture and thus possible emission reduction. The possibility of extending the spectrum of MR applications to reactions of industrial interest in which hydrogen is produced as a by-product has also been studied. In particular, as a case study, the direct conversion of n-butane to isobutene has been analysed, showing that, from a thermodynamic point of view, better performance (equilibrium conversion up to seven times higher than that of a traditional reactor) can be obtained.
Item Applicazione della distillazione a membrana a processi di interesse industriale (2009-11-13) Carnevale, Maria Concetta; Drioli, Enrico; Criscuoli, Alessandra; Molinari, Raffaele
Item Applicazione della shallow water equations per la simulazione numerica a scala di bacino degli eventi alluvionali (Università della Calabria, 2020-04-16) Gangi, Fabiola; Critelli, Salvatore; Macchione, Francesco; Costanzo, Carmelina
The assessment of the hydraulic risk associated with river floods is particularly delicate when the flood events are impulsive, as happens in basins of modest size. The approach currently used for the hydraulic analysis is to consider individual river reaches of interest. The analysis is carried out on the basis of design hydrographs derived through rainfall-runoff hydrological models. In this work, instead, an approach based on the analysis of the hydraulic effects produced by a rainfall event is applied, taking the entire river basin as the domain for the hydraulic computation.
This approach can identify hazardous situations in areas that might otherwise never have been examined. The use of hydrodynamic models based on the shallow water equations (SWEs) has attracted growing interest for simulating events at basin scale. A factor that can limit the results attainable with the physical detail guaranteed by the SWEs is the size of the computational cells: it must be small enough to guarantee an accurate simulation of the hydraulic effects and, at the same time, not so small as to make the computational burden prohibitive on large domains. In this perspective, the present work addresses the identification of criteria for delimiting flood hazard areas, defining the largest size that can be assigned to the computational cell while still obtaining sufficiently reliable results. To this end, a numerical model based on the SWEs, developed by the authors and parallelized using OpenMP and MPI directives, was applied to the basin of the Beltrame river, located on the east coast of Calabria. The Beltrame torrent, like other torrents on the Ionian side of Calabria, has been affected in the past by flood events of considerable magnitude. The event of 10 September 2000 is examined here. The resolution of the available topographic data is variable: 39% of the basin is covered by DTM data at 5 m resolution, 59% by LiDAR data at 1 m resolution and 2% by LiDAR data at 2 m resolution. Starting from the topographic data, four computational domains were generated with unstructured, uniform triangular grids (with element areas ranging from 36 to 900 m2).
The differences between the results were compared in terms of the extent of the flooded areas and of the distribution of the hazard values within the delimited areas, the latter quantified by the product hV, where h is the flow depth at a given point and instant and V is the corresponding velocity. The overlap of the areas for each hazard class was evaluated using several indices: Hit Rate, False Alarm Ratio and Critical Success Index. The analysis carried out in this work showed that, at basin scale, the errors on the maximum water depths grow significantly as the size of the computational cells increases, although they remain smaller, even with the coarsest grids, in the valley part of the basin, characterized by wider flooded areas. In any case, this appears to have little impact on the hazard assessment. The computations and comparisons showed that the areas of different hazard classes are distributed within the basin in a similar way. Moreover, even if a perfect areal overlap is not achieved, they are located spatially so that they either partially overlap or, when they are thin strips, lie very close to one another.
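The three agreement indices named above have standard contingency-table definitions: with hits (flooded in both maps), misses (flooded only in the reference) and false alarms (flooded only in the test map), HR = hits/(hits+misses), FAR = false alarms/(hits+false alarms) and CSI = hits/(hits+misses+false alarms). A minimal sketch on two tiny hypothetical flood masks (illustrative data, not from the study):

```python
import numpy as np

# Hypothetical flooded/not-flooded masks on the same grid: 'ref' from the
# finest grid, 'coarse' from a coarser one (illustrative values only).
ref    = np.array([[1, 1, 0, 0],
                   [1, 1, 1, 0],
                   [0, 1, 0, 0]], dtype=bool)
coarse = np.array([[1, 1, 1, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 0]], dtype=bool)

hits   = np.sum(ref & coarse)    # flooded in both maps
misses = np.sum(ref & ~coarse)   # flooded only in the reference
false_ = np.sum(~ref & coarse)   # flooded only in the coarse result

hit_rate = hits / (hits + misses)           # HR  in [0, 1], best 1
far      = false_ / (hits + false_)         # FAR in [0, 1], best 0
csi      = hits / (hits + misses + false_)  # CSI in [0, 1], best 1
```

The same computation applies per hazard class by first thresholding each map on the hV-based class boundaries.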
It is therefore believed that even with the coarsest grid a sound hazard analysis can be set up at basin scale, certainly with greater precision moving from the narrower mountain branches of the network to the wider reaches that cause flooding in the valley areas.
Item Approccio metodologico per la valutazione modulare della vulnerabilità finalizzata alla riduzione dei rischi naturali antropici 2021 (Università della Calabria, 2021-05-10) Maletta, Roberta; Critelli, Salvatore; Mendicino, Giuseppe
Vulnerability is an important component of risk assessment and a key element in risk perception. Typically, characteristics related to social, cultural, physical and institutional factors increase the susceptibility of an individual or a community to the impacts of hazards. Vulnerability is a dynamic phenomenon that can vary significantly across time and space; it is greatly influenced by human actions and behaviours and by the emergency response, which depends on road accessibility. As a consequence, there is a continuing need for disaster risk reduction strategies that shift attention from assessing hazard events toward reducing vulnerabilities within social systems. Describing and quantifying vulnerability is an important challenge along this path. Our current understanding of vulnerability is guided by methodologies, indicators and measurement standards derived from different schools of thought. This thesis presents a methodological approach to describe and assess the vulnerability index at the inter-municipal scale, using three indices. The spatial analysis is conducted on the basis of census zones in an area defined as a "Territorial Context" (TC), characterized by a union of municipalities. A measure of modular vulnerability is evaluated on the basis of inductive methods.
Vulnerability is defined by conditions determined by social and economic factors, by human and climatic territorial pressures, by critical issues generated by past events and by the functioning of road infrastructures during an event. The three modular components of vulnerability are: TCVIpeople (Territorial Context Vulnerability Index - people); TCVIexposure (Territorial Context Vulnerability Index - exposure); and TCVIemergency (Territorial Context Vulnerability Index - emergency). Thirty-eight variables are selected and geoprocessed for each of the 195 census analysis units in the Mediterranean study area of southern Italy. Using Principal Component Analysis (PCA) with varimax rotation and the Kaiser criterion for component selection, the social and territorial vulnerability indices are identified. The third vulnerability index, TCVIemergency, is derived through transport modelling. In this case, a simultaneous interruption of all road network links exposed to the highest level of hazard is assumed. Models are implemented to assess forest fire, flood and landslide hazards. The TCVIemergency index is calculated, using a shortest-path network model, on the basis of the differences in travel time, before and after the event, from the origins (centroids of the census areas) to the destination points (strategic buildings in emergency planning and civil protection operational structures). This index can provide useful information for evacuation planning and rescue operations during emergency situations. A fuzzy logic model is used for the vulnerability classification, while the fuzzy overlay function is used to calculate the final aggregate TCVI index. The performance of the classification models is measured by statistical metrics. A dedicated Geographic Information System (GIS) is used to capture, geo-process and display spatial data recorded at different scales.
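The component-selection step described above (PCA on standardized indicators, Kaiser criterion, varimax rotation) can be sketched with plain NumPy. The data below are random placeholders standing in for the geoprocessed variables over the 195 census units, and the varimax routine is the standard textbook algorithm, not code from the thesis:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix (classic iterative SVD algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0))))
        R = u @ vt                      # orthogonal rotation matrix
        if s.sum() < var * (1 + tol):   # converged: objective stopped growing
            break
        var = s.sum()
    return loadings @ R

rng = np.random.default_rng(0)
X = rng.normal(size=(195, 10))           # placeholder: 195 census units, 10 variables
Z = (X - X.mean(0)) / X.std(0)           # standardize each indicator
corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]        # sort components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
keep = eigvals > 1.0                     # Kaiser criterion: eigenvalue > 1
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)              # simple-structure loadings for the index
```

The rotated loadings are then typically combined (e.g. weighted by explained variance) into a composite vulnerability score per census unit.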
The GIS technology makes it possible to evaluate and visualize the results through maps, as a realistic representation, and to identify and manage the process. The results contribute to debates in the contemporary literature on vulnerability in several ways. First, these analyses constitute an attempt to quantify and map vulnerability at the census-area level in natural or man-made hazard scenarios. Secondly, new variables are introduced, such as the road network, the category most damaged during events and the one with the greatest repercussions on the community and on the economy. Current methods of vulnerability assessment are in fact mainly based on social aspects, the built environment and climatic factors, leaving out the importance of road infrastructure. The model is developed at the census-area level, the smallest geographic unit that the National Institute of Statistics uses to aggregate demographic data, within an inter-municipal area. It is well known that vulnerability is a scale-dependent variable, and the approach could be very accurate for spatial scales larger than the TC area. Moreover, new classification criteria for vulnerability maps are investigated using fuzzy set theory. Finally, working with Territorial Contexts, a new approach for risk reduction is defined, in order to better meet the needs of civil protection activities. This is the first national attempt to calculate the spatial distribution of vulnerability in a territorial context functional to emergency planning. Through this study, a comprehensive understanding of the driving components contributing to the overall vulnerability is achieved. The results show significant differences in the spatial distribution of social vulnerability, highlighting the multidimensionality and heterogeneity of the municipal characteristics. The TCVI in the southern and central parts of the TC is higher than in its northern and western parts.
In general, analysing the vulnerability values shows that about 56% of census areas fall into the low and low-medium categories, 35% into the high and very high categories, and the remaining 9% into the moderate vulnerability category. The vulnerability maps provide useful territorial information that can support policy-makers in prevention and emergency management. Within the context of natural and man-made hazards, the TCVI could be used to manage the allocation of resources, helping to determine which places may need specialized attention during the immediate response and the long-term recovery after an extreme event. It can provide an indication of the housing areas that need development and humanitarian aid, and can provide guidance for better preparedness, response and mitigation strategies. The vulnerability maps can also guide road administrations in planning and investment, to prioritize interventions and for ordinary maintenance and control activities. Actions and emergency measures are directly connected with resilience; this work can therefore help to increase the capacity building of human resources, improve land-use management, and strengthen the preparedness and emergency measures taken during and after an event. Following the introduction, the present study is composed of two main sections that delve into: 1) conceptual frameworks for vulnerability and hazard assessment, accomplished by discussing the relevant primary research literature and analysing the events recorded in the past; 2) methodological approaches for modelling natural and anthropic hazards and for measuring vulnerability in a Territorial Context. An application to the Territorial Context of Marina di Gioiosa Ionica in southern Italy is developed. Finally, the last section presents the main conclusions of the study and potential developments.
Keywords: forest fire, landslide and flood hazard, vulnerability index, territorial context, indices and maps, social and territorial vulnerability, road susceptibility.
Item Aree interne, processi innovativi per le comunità emergenti. Strategie e tattiche di rural making negli ITI denominati Presila catanzarese, Reventino-Savuto e area Grecanica (2019-05-10) Mangano, Giuseppe; Nava, Consuelo; Rossi, Franco; Giordano, Girolamo
Item Automated valuation methods in un sistema informativo immobiliare pilota (2014-05-13) De Reggiero, Manuela; Salvo, Francesca; Festa, Demetrio
Item Bio-Hybrid Membrane Process for Food-based Wastewater Valorisation: a pathway to an efficient integrated membrane process design (2014-11-11) Gebreyohannes, Abaynesh Yihdego; Giorno, Lidietta; Curcio, Efrem; Aimar, Pierre; Vankelecom, Ivo F.J.; Molinari, Raffaele
The food industry is by far the largest consumer of potable water among industries, releasing about 500 million m3 of wastewater per annum with a very high organic load. Simple treatment of this stream using conventional technologies often fails because cost factors override its pollution-abating capacity. Hence the focus has recently centred on valorization: recovering valuable components and reclaiming good-quality water using an integrated membrane process. Membrane processes cover practically all existing and needed unit operations used in wastewater treatment facilities. They often come with advantages such as simplicity, modularity, process or product novelty, improved competitiveness and environmental friendliness. Thus the main focus of this PhD thesis is the development of an integrated membrane process comprising microfiltration (MF), forward osmosis (FO), ultrafiltration (UF) and nanofiltration (NF) for the valorization of food-based wastewater within the logic of zero liquid discharge. As a case study, vegetation wastewater from olive oil production was considered.
The challenges associated with the treatment of vegetation wastewater are: the absence of unique hydraulic or organic loadings, the presence of biophenolic compounds, severe membrane fouling and the periodic release of large volumes of wastewater. In particular, the presence of biophenolic compounds makes the wastewater detrimental to the environment. However, recovering these phytotoxic compounds can also add economic benefit to the treatment, since they have interesting bioactivities that can be exploited in the food, pharmaceutical and cosmetic industries. The first part of the experimental work places special emphasis on the development of biohybrid membranes used to control membrane fouling during MF. Despite 99% TSS removal by rough pre-filtration, continuous MF of vegetation wastewater using a 0.4 μm polyethylene membrane over 24 h resulted in continuous flux decline, due to severe membrane fouling mainly caused by macromolecules such as pectins. To overcome this problem, biocatalytic membrane reactors with covalently immobilized pectinase were used to develop a self-cleaning MF membrane. The biocatalytic membrane with pectinase on its surface gave a 50% higher flux than its inert counterpart. This better performance was attributed to simultaneous in-situ degradation of foulants and removal of hydrolysis products from the reaction site, which overcomes enzyme product inhibition. Although the biocatalytic membrane performed better, its fate is disposal once the covalently immobilized enzyme is deactivated or oversaturated with foulants. To surmount this problem, a new class of superparamagnetic biochemical membrane reactor was developed, verified and optimized. This development is novel in its use of superparamagnetic nanoparticles both as a support for the immobilized enzyme and as an agent to render the membrane magnetized.
This reversible immobilization method was designed to facilitate the removal of the enzyme during membrane cleaning using an external magnet. Hence a PVDF-based organic-inorganic (O/I) hybrid membrane was prepared using superparamagnetic nanoparticles (NPSP) as inorganic filler. In parallel, superparamagnetic biocatalytic nanocomposites were prepared by covalently immobilizing pectinase onto the surface of NPSP dispersed in aqueous media. The synergetic magnetic responsiveness of both the O/I hybrid membrane and the biocatalytic particles to an external magnetic field was then used to physically immobilize the biocatalytic particles on the membrane. This magnetically controlled dynamic layer of biocatalytic particles prevented direct membrane-foulant interaction and allowed in-situ degradation, easy magnetic recovery of the enzyme from the membrane surface, use of both membrane and immobilized enzyme over multiple cycles and the possibility of fresh enzyme make-up. The system gave stable performance over a broad range of experimental conditions: 0.01-3 mg/mL foulant concentration, 1-9 g of bionanocomposites per m2 of membrane area, 5-45 L/m2·h flux and different filtration temperatures. Under conditions in which the mass transfer rate prevailed over the reaction rate, the system gave up to a 75% reduction in filtration resistance. After optimization of the different operational parameters, no visible loss in enzyme activity or overall system performance was observed when a 0.3 mg/mL pectin solution was continuously filtered for over two weeks. In addition, the chemical cleaning stability of the O/I hybrid membrane was studied under accelerated ageing and accelerated fouling conditions. Ageing changed the physicochemical characteristics and enhanced the fouling propensity of the membrane, due to step-by-step degradation of the polymeric coating layer of the NPSP. However, a 400 ppm NaOCl solution at pH 12 was found to be compatible and was henceforth used to clean the membrane.
The second major limitation identified in the treatment of vegetation wastewater is the large volume of wastewater released in a short period following the olive harvest. To alleviate this problem, FO was investigated to concentrate the wastewater. This process is believed to be less energy demanding, provided that the draw solution does not need to be regenerated, and to have low fouling propensity. Operating with a 3.7 molal MgCl2 draw solution and a 6 cm/s crossflow velocity, single-step FO resulted in an average flux of 5.2 kg/m2·h and a 71% volume concentration factor, with almost complete retention of all the pollutants. Moreover, the system gave stable performance over ten days of continuous operation. After FO, both NF and UF were used to fractionate the recovered biophenols from the FO concentrate streams. Compared to the polymeric UF membrane, ceramic NF gave a better flux of 27 kg/m2·h at a 200 L/h feed flow rate and 7 bar TMP. Finally, when FO was used as a final polishing step to recover highly concentrated biophenols from the UF permeate, it gave an average flux of 5 kg/m2·h and a VCF of 64%. In conclusion, considerable success has been achieved in tackling the two most important challenges of vegetation wastewater valorisation using the concepts of biohybridization and FO. The bioinspired NPSP provide strong evidence that magnetically controlled enzyme immobilization has immense potential for membrane fouling prevention and paves the way for a potential breakthrough in continuous wastewater filtration. With the bio-inspired NPSP biocatalytic membrane reactor at its heart, an integrated membrane process can be used successfully for the continuous valorisation of food-based wastewater.
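As a rough illustration of what the reported average flux implies for batch concentration times: assuming (hypothetically) a 1 m3 batch and 10 m2 of membrane, and reading the 71% volume concentration factor as the permeated fraction of the initial volume (the thesis may define it differently), a simple mass balance gives the processing time:

```python
# All numbers except the 5.2 kg/m2·h average flux and the 71% figure are
# illustrative assumptions, not data from the thesis.
flux_kg_m2_h = 5.2      # average FO flux reported for single-step FO
area_m2 = 10.0          # assumed membrane area (hypothetical)
V0_kg = 1000.0          # assumed initial batch, ~1 m^3 of wastewater
permeated_fraction = 0.71

permeate_kg = permeated_fraction * V0_kg        # mass of water to remove
hours = permeate_kg / (flux_kg_m2_h * area_m2)  # ~13.7 h of filtration
```

Scaling the area linearly shortens the batch time proportionally, which is why the seasonal surge of wastewater drives the membrane-area sizing.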
In addition to fouling prevention, they open a new horizon for applications in localized biocatalysis to intensify performance in industrial production, processing, environmental remediation or bio-energy generation.
Item Biodegradable polymeric membrane systems for tissue engineering applications (2013-11-12) Messina, Antonietta; De Bartolo, Loredana; Curcio, Efrem; Molinari, Raffaele
Item Biomateriali a base di silice per applicazioni innovative (2007-11-09) Carino, Ida; Aiello, Rosario
Item Characterization of real aquifers using hydrogeophysical measurements. An application to the Chambo aquifer (Ecuador) (2014-10-29) Mendoza Trujillo, Benito Guillermo; Macchione, Francesco; Straface, Salvatore
Item Charged-particle distributions and material measurements in √s = 13 TeV pp collisions with the ATLAS Inner Detector (2017-07-14) Cairo, Valentina Maria Martina; Pantano, Pietro; Dell'Acqua, Andrea; Schioppa, Marco
Run 2 of the Large Hadron Collider, which began in spring 2015, poses new challenges to the experiments with its unprecedented energy scale and high-luminosity regime. To cope with the new experimental conditions, the ATLAS experiment was upgraded during the first long shutdown of the collider, in the period 2013-2014. The most relevant change in the ATLAS Inner Detector was the installation of a fourth pixel layer, the Insertable B-Layer, at a radius of 33 mm, together with a new, thinner beam pipe. The Pixel Services, located between the Pixel and SCT detectors, were also modified. Owing to the radically modified ID layout, many aspects of the track reconstruction programs had to be re-optimized.
In this thesis, the improvements to the tracking algorithms and the studies of the material distribution in the Inner Detector are described in detail, together with the improvements introduced in the geometry model used in simulation and the re-evaluation and reduction of the systematic uncertainty on the track reconstruction efficiency. The results of these studies were applied to the measurement of charged-particle multiplicity in proton-proton collisions at a centre-of-mass energy of 13 TeV. The charged-particle multiplicity, its dependence on transverse momentum and pseudorapidity, and the dependence of the mean transverse momentum on the charged-particle multiplicity are presented for various fiducial phase spaces. The measurements are corrected for detector effects, presented as particle-level distributions and compared to the predictions of different Monte Carlo event generators. New sets of recommended performance figures, along with the related systematic uncertainties, were also derived for several aspects of ATLAS tracking, such as track reconstruction efficiency, fake rate and impact parameter resolution. These recommendations provide information on appropriate working points, i.e. track selection criteria with well-understood performance. They apply to physics analyses using Inner Detector tracks in Run 2 data and are important inputs for other objects based on tracks, such as jets.
A simulation-based method which uses the tracking recommendations to calibrate light-jets mis-tagged as b-jets is also presented in the context of the measurement of the cross-section of the W-boson produced in association with b-jets at 13 TeV, together with an overview of the inclusive W-boson cross-section analysis.Item Chemical looping desulphurization: model and applications to power systems(2016-02-26) Settino, Jessica; Molinari, Raffaele; Amelio, MarioAbsorption processes, both physical and amine-based chemical, are currently used to remove sulphur compounds effectively. Despite the excellent desulphurization achieved, this strategy is thermally inefficient, since it requires low-temperature gas. The aim of this work is to analyse alternative solutions operating at higher temperatures. To this end, the chemical looping process has been analysed. It is a new technology in which a sorbent material, in contact with the raw fuel gas, is converted into its sulphide and then regenerated, so that the cycle can start again. The system consists of two reactors: one for regeneration and the other for desulphurization. A mathematical model of this system was developed with the Athena Visual Studio software and its results were compared with those obtained from the model proposed by the National Energy Technology Laboratory, validated against experimental data. In a subsequent phase, the modelled system was applied to three case studies of industrial interest: electric power production in integrated gasification combined cycle plants, methanation processes, and methanol synthesis processes.
Through simulations carried out with the commercial software Thermoflex and UniSim Design, the effects of hot-gas desulphurization on the performance of the different systems were studied.Item Coherent structures of turbulence in wall-bounded turbulent flows(2011-10-24) Ciliberti, Stefania Angela; Macchione, Francesco; Alfonsi, GiancarloDirect Numerical Simulation (DNS) of a fully developed turbulent channel flow represents a powerful tool in turbulence research: it has been carried out to investigate the main characteristics of wall-bounded turbulence. It consists of solving the Navier-Stokes equations numerically, with physically-consistent accuracy in space and time. The major difficulty in performing turbulence calculations at values of the Reynolds number of practical interest lies in the remarkable amount of computational resources required. Recent advances in high performance computing, especially related to hybrid architectures based on CPU/GPU, have completely changed this scenario, opening the field of High Performance Direct Numerical Simulation of turbulence (HPDNS), to which new and encouraging perspectives have been associated with the development of an advanced numerical methodology for studying turbulence phenomena in detail. The research activities related to the Ph. D. Program concern the high performance direct numerical simulation of a wall-bounded turbulent flow in a plane channel, with respect to the Reynolds number dependence, in order to investigate coherent structures of turbulence in the wall region. The objectives of the research have been achieved by means of the construction and validation of DNS turbulent flow databases, which give a complete description of the turbulent flow. The Navier-Stokes equations that govern the flow of a three-dimensional, fully developed, incompressible and viscous fluid in a plane channel have been integrated, and a computational code based on a mixed spectral-finite difference scheme has been implemented.
In particular, a novel parallel implementation of the Navier-Stokes solver on GPU architectures has been proposed in order to perform simulations at high Reynolds numbers. To deal with the large amount of data produced by the numerical simulations, statistical tools have been developed to verify the accuracy of the computational domain and to describe the energy budgets that govern the energy transfer mechanisms close to the wall. Flow visualization has been provided in order to identify and evaluate the temporal and morphological evolution of coherent structures of turbulence in the wall region. The first part of the thesis concerns a three-dimensional, fully developed, incompressible and viscous flow. The second part is devoted to the study of the numerical method for the integration of the Navier-Stokes equations.
A mixed spectral-finite difference technique for the numerical integration of the governing equations is devised: Fourier decomposition in both the streamwise and spanwise directions and a finite difference method along the wall-normal direction are used, while a third-order Runge-Kutta algorithm coupled with the fractional-step method is used for time advancement and for satisfying the incompressibility constraint. A parallel computational code has been developed for multicore architectures; furthermore, in order to simulate the turbulence phenomenon at high Reynolds numbers, a novel parallel computational model has been developed and implemented for hybrid CPU/GPU computing systems. The third part of the Ph. D. thesis concerns the analysis of the numerical results, in order to evaluate the relationship between turbulence statistics, energy budgets and flow structures, increasing the knowledge of wall-bounded turbulence for the development of new predictive models and for the control of turbulence.Item Conglomerati cementizi a basso impatto ambientale confezionati con materiale P.F.U.(2014-02-17) Iacobini, Ivan; De Cindio, Bruno; Crea, FortunatoItem Contributo alla riossigenazione naturale della zona eufotica dei corpi idrici dovuto all'attività fotosintetica(2010-10-27) Nigro, Gennaro; Macchione, Francesco; Frega, Giuseppe; Infusino, ErnestoItem Costruzioni di nuove conoscenze condivise: la definizione di Tecniche urbanistiche e ambientali per centri di medie e piccole dimensioni(2008-10-29) Palermo, Annunziata; Francini, Mauro; d'Elia, Sergio
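The mixed spectral-finite difference discretization described in the channel-flow DNS abstract above can be illustrated with a minimal sketch: FFT-based (spectral) differentiation in a periodic direction combined with second-order central finite differences in the wall-normal direction. This is not the thesis code; the grid sizes and the test field are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a mixed spectral-finite-difference discretization:
# spectral derivatives along the periodic (streamwise) direction,
# central finite differences along the wall-normal direction.

def ddx_spectral(f, Lx):
    """d/dx via FFT along axis 0 (periodic direction of length Lx)."""
    nx = f.shape[0]
    # Angular wavenumbers for a periodic domain of length Lx
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    return np.real(np.fft.ifft(1j * kx[:, None] * np.fft.fft(f, axis=0), axis=0))

def ddy_fd(f, dy):
    """d/dy via 2nd-order central differences along axis 1 (wall-normal),
    with one-sided differences at the two walls."""
    df = np.empty_like(f)
    df[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / (2.0 * dy)
    df[:, 0] = (f[:, 1] - f[:, 0]) / dy          # lower wall
    df[:, -1] = (f[:, -1] - f[:, -2]) / dy       # upper wall
    return df

# Illustrative test field u(x, y) = sin(x) * y^2 on [0, 2*pi) x [0, 1]
nx, ny, Lx = 64, 101, 2.0 * np.pi
x = np.linspace(0.0, Lx, nx, endpoint=False)
y = np.linspace(0.0, 1.0, ny)
dy = y[1] - y[0]
u = np.sin(x)[:, None] * y[None, :] ** 2

# Compare against the exact derivatives (interior points for d/dy)
err_x = np.max(np.abs(ddx_spectral(u, Lx) - np.cos(x)[:, None] * y[None, :] ** 2))
err_y = np.max(np.abs(ddy_fd(u, dy)[:, 1:-1] - 2.0 * np.sin(x)[:, None] * y[None, 1:-1]))
```

For this smooth test field both errors are at machine-precision level: the spectral derivative is exact for resolved Fourier modes, and the central difference of a quadratic profile is exact. In an actual DNS solver these operators would be embedded in the Runge-Kutta/fractional-step time advancement mentioned in the abstract.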