A global-level thermodynamic understanding of the local thermodynamic equilibrium assumption of nonequilibrium thermodynamics is presented. It is demonstrated at the global thermodynamic level how the Gibbs relation given by equilibrium thermodynamics also describes a passage along an irreversible path. Specifically, it is demonstrated that the Gibbs relation indeed also accounts for irreversibility in a spatially uniform system with internal sources of irreversibility, for example chemical reactions proceeding at non-vanishing rates, as well as irreversibility originating in the gradients of intensities across the boundary of the system. Hence, the thermodynamic functions appearing in the Gibbs relation are those of either equilibrium or nonequilibrium states: the former when it describes a passage through a succession of equilibrium states, and the latter when it describes a passage through a succession of nonequilibrium states, i.e. along irreversible trajectories. It is thereby demonstrated that the scope of validity of the local thermodynamic equilibrium assumption of nonequilibrium thermodynamics is much wider than has been understood so far.
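For reference, the Gibbs relation in question is the standard one of equilibrium thermodynamics (written here in common notation as our transcription; the abstract itself does not display it):

```latex
\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_k \mu_k\,\mathrm{d}n_k
```

where the sum over species k, with chemical potentials mu_k, is the term through which chemical irreversibility enters.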
An efficient heating system for the cabin of electric-driven vehicles (xEVs) is required to minimize the reduction in driving range. However, studies of heat pumps that use waste heat from the fuel cell stack, electric devices, and battery in xEVs are limited. In this study, the heating performance of a coolant-source heat pump using waste heat in xEVs is investigated by varying the coolant temperature and coolant volumetric flow rate to ensure sufficient capacity under severe ambient conditions. A novel triple-fluid heat exchanger is introduced to recover waste heat at different temperature levels: high-level heat from the fuel cell stack and relatively low-level heat from the electric devices. The heating performance of the coolant-source heat pumps using the electric-device coolant and the stack coolant shows different characteristics owing to the different temperature levels of the coolants. This study suggests an optimum heat pump system for xEVs with respect to driving-range characteristics under severe ambient conditions.
Nanotechnology could be used to enhance the possibilities of developing conventional and stranded gas resources and to improve the drilling process and oil and gas production by making it easier to separate oil and gas in the reservoir. Nanotechnology can make the oil and gas industry considerably greener. There are numerous areas in which nanotechnology can contribute to more efficient, less expensive, and more environmentally sound technologies than those readily available. We identified the following possibilities for nanotechnology in the petroleum industry: 1- Nanotechnology-enhanced materials that provide strength to increase performance in drilling, tubular goods, and rotating parts; 2- Designer properties that enhance the hydrophobic behavior of materials for water-flooding applications; 3- Nanoparticulate wetting studies carried out using molecular dynamics; 4- Lightweight materials that reduce weight requirements on offshore platforms; 5- Nanosensors with improved temperature and pressure ratings; 6- New imaging and computational techniques to allow better discovery, sizing, and characterization of reservoirs.
Nanoparticles have been successfully used in drilling muds for the past 50 years. Only recently have all the other key areas of the oil industry, such as exploration, primary and assisted production, monitoring, refining, and distribution, begun approaching nanotechnologies as the potential philosopher's stone for facing critical issues related to remote locations, harsh conditions (high-temperature and high-pressure formations), and nonconventional reservoirs (heavy oils, tight gas, tar sands). The general aim is to bridge the gap between the oil industry and the nanotechnology community through various initiatives, such as consortia between oil and service companies and nanotechnology centres of excellence, networking communities, workshops and conferences, and even dedicated research units inside some oil companies. This paper provides an overview of the most interesting nanotechnology applications and highlights the potential benefits this technology could bring to the oil and gas industry.
Production of biodiesel from food crops may cause negative economic, social, and environmental effects; therefore, alternatives are sought to satisfy the raw-material demand for biodiesel production [1,2]. The aim of this research is to evaluate the possibilities of applying non-food (Camelina sativa) oil, fatty wastes of animal origin, and butanol for biodiesel production by biotechnological methods. The most effective biocatalyst for biodiesel synthesis from a mixture of camelina oil and animal fat by transesterification with butanol was selected. The study screened the following lipases as catalysts: Novozyme 435, Lipozyme TL IM, Lipozyme RM IM, F-EC, G "AMANO" 50. The synthesis of biodiesel was performed under the following conditions: temperature from 30 to 80 °C; butanol-to-oil molar ratio from 1 to 7; enzyme content 3-17%; water content 0-12%; duration 1-24 hours. Biodiesel properties were analysed according to the requirements of the standards. Camelina oil is high in unsaturated fatty acids (more than 85%) [3]; the iodine value of esters produced from camelina oil equals 144 g I2/100 g and exceeds the maximum value given in the standard. In contrast, animal fatty waste is characterized by a low iodine value (52 g I2/100 g) [4], with a saturated fatty acid content of 53%. To meet the quality requirements of the standard, a 1:3 mixture of camelina oil and animal fat could be used for biodiesel production. The lipase Lipozyme TL IM was selected as biocatalyst for the investigations. The optimal conditions for the first production stage were determined: 9% lipase Lipozyme TL IM (by weight of oil); oil-to-butanol molar ratio 1:6; temperature 40 °C; duration 6 hours. The optimal conditions of the second stage are: lipase content 5%; oil-to-butanol molar ratio 1:8; temperature 40 °C; synthesis duration 6 hours.
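The 1:3 blend ratio can be motivated by a simple mass-weighted average of the two feedstocks' iodine values, a common first approximation for blends (our illustration, not the authors' calculation; the 120 g I2/100 g limit assumed here is the EN 14214 maximum for fatty acid esters):

```python
# Hypothetical check: estimate the iodine value of a camelina-oil/animal-fat
# blend by mass-weighted averaging of the component iodine values.
def blend_iodine_value(iv_a, iv_b, ratio_a, ratio_b):
    """Mass-weighted iodine value of a two-component blend."""
    total = ratio_a + ratio_b
    return (iv_a * ratio_a + iv_b * ratio_b) / total

IV_CAMELINA = 144.0   # g I2 / 100 g (from the abstract)
IV_ANIMAL_FAT = 52.0  # g I2 / 100 g (from the abstract)

# 1:3 camelina oil : animal fat, as proposed in the abstract
iv_blend = blend_iodine_value(IV_CAMELINA, IV_ANIMAL_FAT, 1, 3)
print(f"Blend iodine value: {iv_blend:.0f} g I2/100 g")  # 75 g I2/100 g

LIMIT = 120.0  # assumed EN 14214 maximum iodine value
print("Meets standard:", iv_blend <= LIMIT)  # True
```

With these figures the blend comes in well under the assumed limit, consistent with the abstract's conclusion that the 1:3 mixture is usable.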
It was determined that the butyl esters meet the standard requirements when the antioxidant additive Ionol BF 200 (1000 ppm) and the depressant Chimec 6635 (2000 ppm) are used.
[1] J.C. Escobar, E.S. Lora, O.J. Venturini, E.E. Yáñez, E.F. Castillo, O. Almazan, Renew. Sust. Energ. Rev. 13 (2009) 1275-1287. [2] J. Gardy, M. Rehan, A. Hassanpour, X. Lai, A.S. Nizami, J. Environ. Manage. 249 (2019). [3] B.R. Moser, S.F. Vaughn, Bioresource Technology 101 (2010) 646-653. [4] G. Tashtoush, M.I. Al-Widyan, M.M. Al-Jarrah, Energy Convers. Manag. 45 (2004) 2697-2711.
The requirements on renewable-energy equipment such as high-voltage transformers and large industrial gearboxes, as installed in on- and offshore wind turbines, are rising. Ever more flexibility at maximum operational reliability and a long lifetime are required of them at the same time, so the requirements for the oil and for oil condition monitoring grow correspondingly. This presentation provides information about a novel online oil condition monitoring system that addresses these priorities for both energy sectors. The focus is on high-voltage transformers, but monitoring applications for lubrication oils in hybrid applications (electric vehicles) are also addressed.
The online oil sensor system measures the conductivity kappa, the relative permittivity epsilon_r, and the temperature T independently of each other. Based on a very sensitive measurement method with high accuracy, even small changes in the conductivity and dielectric constant of the oil composition can be detected reliably. The new sensor system effectively monitors the proper operating conditions of high-voltage transformers, industrial gearboxes, and test rigs for electric vehicles, where the oil has the double function of lubrication and insulation.
The system enables damage prevention for the high-voltage transformer through early warning of critical operating conditions and parameter trending, realized by precise, high-time-resolution measurement of the electrical conductivity, the relative permittivity, the loss factor tan delta, and the oil temperature. Corrective procedures and/or maintenance can be carried out before actual damage occurs.
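The loss factor tan delta is not an independent quantity: for conduction-dominated losses it follows directly from the measured conductivity and relative permittivity at a given frequency. A minimal sketch of that standard relationship (our illustration with hypothetical values, not the vendor's algorithm):

```python
import math

EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def tan_delta(kappa, eps_r, freq_hz):
    """Conduction loss factor: tan(delta) = kappa / (omega * eps0 * eps_r)."""
    omega = 2 * math.pi * freq_hz  # angular frequency, rad/s
    return kappa / (omega * EPS_0 * eps_r)

# Illustrative values only: a fresh insulating oil at power frequency
print(tan_delta(kappa=1e-12, eps_r=2.2, freq_hz=50))
```

A rising tan delta at constant frequency thus signals rising conductivity and/or falling permittivity, which is why trending these two measured quantities gives early warning.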
Once the oil condition monitoring sensor systems are installed on the high-voltage transformer or an oil regeneration plant, the measured data can be displayed and evaluated elsewhere in the sense of a full online condition monitoring system.
24/7 monitoring of the system during operation enables specific preventive and condition-based maintenance independent of rigid inspection intervals.
In recent years, grid operators in Texas, California, and the Midwest "shed load," that is, forced many of their customers into a blacked-out (no electricity available) condition. These blackouts occurred when the grid's power plants could not supply enough electricity to match the demand on the grid. This paper describes the root causes of many of these blackouts, including the incentive system of power plant payments, the role of renewables in increasing the fragility of the grid, and the rules for dispatch on the grid [1].
The Regional Transmission Organization (RTO) grids in New England and Texas are the main focus of the paper. These areas are also called the "deregulated" areas, though that is a misnomer: they are highly regulated, but not in the manner in which most utilities were regulated before 1990. For this analysis, data were collected from the grid operator websites [2], from trade publication analyses, and from academic analyses [3].
The conclusion is that rules and incentives in RTO areas actively discourage investments in reliability. The RTO areas encourage the "fatal trifecta" of over-reliance on intermittent renewables, just-in-time delivery of natural gas, and hoped-for imports from neighboring grids. These changes make a grid more fragile and are the root cause of most of the recent blackouts.
Biohydrogen has the potential to replace current hydrogen production technologies that rely heavily on fossil fuels [1]. Dark fermentation has excellent potential as a practical biohydrogen production route because of its high hydrogen yields [2]. Substrate concentration is one significant factor in hydrogen production. This study examined the influence of substrate concentration on fermentative hydrogen production by Clostridium manihotivorum CT4, a microorganism isolated from cassava pulp, a solid waste of the cassava starch industry [3]. Experiments were conducted in batch tests using glucose as substrate, varied from 0 to 25 g/L, at 37 °C and an initial pH of 7.0. The results showed that during dark fermentation, the hydrogen production potential and hydrogen production rate increased with substrate concentration over the 0 to 25 g/L range. In our batch assays, the highest hydrogen production yield (41.5 mL/g glucose) and substrate degradation efficiency (96.2%) were obtained at the lower substrate concentration of 5 g/L glucose, because a lower substrate concentration diminishes volatile fatty acid (VFA) concentrations, which in turn affects the hydrogen production yield. Increasing the initial substrate concentration beyond this point caused a reduction in substrate utilization rate and hydrogen production yield, because at higher glucose concentrations the accumulated products of glucose utilization, such as VFAs, can cause the pH to drop below 5.5, making the culture too acidic for microbial growth [4]. These results will be useful for optimizing the substrate concentration and controlling fermentative biohydrogen production by C. manihotivorum CT4 directly from glucose under these conditions.
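The reported figures allow a back-of-the-envelope estimate of total hydrogen from a batch (our illustration, not part of the study; we assume the yield is expressed per gram of glucose supplied and take a hypothetical 1 L working volume):

```python
# Estimate total H2 volume from a batch fermentation, using the abstract's
# best-case figures: 41.5 mL H2 per g glucose at 5 g/L substrate.
def total_h2_ml(glucose_g_per_l, volume_l, yield_ml_per_g):
    """Total hydrogen volume (mL) = substrate mass (g) * yield (mL/g)."""
    return glucose_g_per_l * volume_l * yield_ml_per_g

h2 = total_h2_ml(glucose_g_per_l=5.0, volume_l=1.0, yield_ml_per_g=41.5)
print(f"Estimated H2 from a 1 L batch: {h2:.1f} mL")  # 207.5 mL
```

Such a calculation is useful when scaling the reported per-gram yield to a reactor of a given working volume.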
With the world's population increasing from seven billion currently to approximately nine billion by the year 2040, achieving a healthy lifestyle for all people on earth will depend, in part, on the availability of affordable energy, especially electricity. This work considers the various choices, or options, for producing electricity and the consequences associated with each option. The options are fossil, renewable, and nuclear. The consequences associated with these three options are addressed in five different areas: public health and safety, environmental effects, economics, sustainability, and politics. All options are needed, but some options are better than others when compared in the five areas. This presentation is a brief summary of a short course entitled "Energy Choices and Consequences", which was initially created by the author several years ago and is continuously updated. The presentation will provide updated information through October of 2022.
Gas reservoirs can be classified into dry gas reservoirs, wet gas reservoirs, and gas condensate reservoirs. In gas condensate reservoirs, the reservoir temperature lies between the critical temperature and the cricondentherm. The gas will drop out liquid by retrograde condensation in the reservoir when the pressure falls below the dew point. This heavy part of the gas has found many applications in industry and in daily life; by remaining in the reservoir, not only is this valuable liquid lost, but its accumulation also forms a condensate bank near the wellbore region, which considerably reduces well productivity.
In this paper, gas injection into a gas condensate reservoir is studied as a means of increasing the recovery factor; moreover, the capabilities of different injection gases (CO2, N2, CH4, and separator gas) are compared through different injection schemes. The injection schemes considered are: different injection rates, different injection pressures, and different injection durations. We expect the response of the reservoir to differ from case to case, but that injection of any of these gases can increase condensate recovery. As many parameters beyond the gas and condensate recovery factors can affect the choice of injection scheme, an economic evaluation is indispensable to take them all into account and determine the best one.
In this paper, the efficiency of different schemes of gas injection and gas recycling for condensate recovery from a gas condensate reservoir has been studied and compared through compositional simulation. The effects of changing injection rate, injection pressure, and injection duration have been investigated for three injection gases (N2, CO2, CH4) and for gas recycling. The appropriate and optimum case can be selected by considering the results of the simulation work and performing an economic evaluation that takes into account all the parameters, such as the prices of the gas and condensate, the prices of the injection gases, and the cost of the facilities needed in each scheme at present price levels.
This work presents an energy, exergy, and environmental evaluation of a novel compound PV/T (photovoltaic thermal) waste-heat-driven ejector-heat pump system for simultaneous data center cooling and waste heat recovery for district heating networks. The system uses PV/T waste heat with an evaporative condenser as the driving force for an ejector, while exploiting the generated electric power to operate the heat pump compressor and pumps. Several environmentally friendly refrigerant strategies were assessed for the vapor compression system. The study compares eleven lower global warming potential (GWP) refrigerants from different ASHRAE safety groups (R450A, R513A, R515A, R515B, R516A, R152a, R444A, R1234ze(E), R1234yf, R290, and R1243zf) with the hydrofluorocarbon (HFC) R134a. The results show that the system delivers a remarkable overall performance enhancement for all investigated refrigerants in both modes. Regarding the energy analysis, the cooling coefficient of performance (COPC) enhancement ranges from 15% to 54% compared with a traditional R134a heat pump. The most pronounced enhancement is obtained with R515B (a 54% COPC enhancement and a 49% heating COP enhancement), followed by R515A and R1234ze(E). Concerning the exergy analysis, R515B shows the lowest exergy destruction and the highest exergy efficiency of all investigated refrigerants.
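For clarity, the percentage enhancements quoted above follow the standard relative-change definition (the specific COP values below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Relative COP enhancement, as commonly reported in refrigerant comparisons:
# enhancement (%) = (COP_new - COP_baseline) / COP_baseline * 100
def cop_enhancement_pct(cop_new, cop_base):
    return (cop_new - cop_base) / cop_base * 100.0

# Hypothetical numbers: a candidate refrigerant vs. an R134a baseline of 3.0
print(cop_enhancement_pct(cop_new=4.5, cop_base=3.0))  # 50.0
```

So a 54% COPC enhancement over an R134a baseline means the candidate refrigerant's cooling COP is 1.54 times the baseline value.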
Formation damage, a reduction in the natural capability of a reservoir to produce its fluids, such as a decrease in porosity or permeability, or both, can occur near the wellbore face (easier to repair) or deep within the rock (harder to repair). Formation damage is caused by several mechanisms: 1- physical plugging of pores by mud solids; 2- alteration of reservoir rock wettability; 3- precipitation of insoluble materials in pore spaces; 4- clay swelling in pore spaces; 5- migration of fines into pore throats; 6- introduction of an immobile phase; and 7- emulsion formation and blockage. Damage can occur when sensitive formations are exposed to drilling fluids. In this paper we discuss damage during drilling, completion, production, and workover; which properties most influence formation damage; damage mechanisms; how formation mineralogy and clay chemistry influence damage; fines migration, scale, paraffin, and asphaltenes; and damage prevention and removal.
Matrix acid stimulation is a relatively simple technique that is one of the most cost-effective methods to enhance well productivity and improve hydrocarbon recovery. Carbonate acidizing is usually performed with HCl, except in situations where temperatures are very high and corrosion is an issue. Acids attack steel to produce solutions of iron salts while generating hydrogen gas. Over the years, many different acidizing systems have been developed for specific applications.
Matrix acidizing, with the appropriate systems in correctly identified candidate wells, is the most cost-effective way to enhance oil production in sandstone and carbonate reservoirs. Increased understanding of the chemistry and physics of the acidizing process as well as improvements in well site implementation have resulted in better acidizing success. Use of computer software that includes all known rules and guidelines for sandstone acidizing can greatly improve the success ratio by eliminating inappropriate designs and standardizing treatments. New acid systems with improved performance were developed specifically to address many of the problems inherent in sandstone acidizing.
Industrial systems such as mining, agriculture, and manufacturing operations have been through several revolutions in history: from human and animal power, through mechanization and automation, and now toward smart/intelligent systems. This new revolution has been characterized as Industry 4.0. Industry 4.0 in turn is powered by the Internet of Things through new technologies such as networking, smart sensors, robots, and artificial intelligence. The integration of these technologies will result in a new style of industrial operation based on data models of the physical process being operated. For example, precise geo-representative information that digitally twins reality will allow a "virtual" machine operator to run very dangerous real processes within meta-models that create a virtual world, without exposing the operator to the hazard.
The teleoperation and automation of mobile mining machines and systems was experimented with at INCO Limited as early as 1986 with the Future ORE Manufacturing (FOREMAN) project. This was further advanced in the 1990s and 2000s with the "Mine Automation Program" (MAP) in Canada and the "Intelligent Mine Program" in Scandinavia. The concept of these projects was the total operation of the mine from a surface control room. The objective was to remove humans from the potentially dangerous mining environment to improve safety: no equipment operators should travel into the underground environment. Secondary goals were the elimination of travel time and increased productivity. One machine operator running a machine from a surface control room, and subsequently one operator running several machines, was also accomplished.
The MAP work included a pilot-scale mine, the 175 Orebody, with robotized mining equipment and consumables (such as explosives, drill bits, and more) tailored to the process. At the core of the work, each piece of mobile equipment was fitted with computer controllers, sensors, actuators, and networking equipment to robotize it. A broadband radio network was developed and installed throughout the mine and linked to control stations that emulated the cockpits of the mining machines; machine control was transferred over the network. This scenario was accomplished by meeting two critical objectives: large-scale data transfer capacity and near-zero latency of information transfer. The unique radio characteristics of the underground environment made it possible to use wireless control systems that consumed the entire radio frequency spectrum. The first steps included the teleoperation of mining equipment using radio modems, then moving to large-scale Ethernet networks based on broadband radio networking, using a system equivalent to today's 5G. As mining automation continues to evolve towards full industrial application, the next technology steps of Meta-Mining began.
In 2012, work began on the extremely challenging problem of removing humans from the task of rock blockage removal. These systems required the full implementation of Industry 4.0 technology, with virtual operation using meta-models of the mining problem to keep operators safe. The Robotic Hang-up Assessment and Removal System (ROAR) was conceived and developed to provide a human/machine system to assess and remove potentially dangerous rocks that currently threaten the life of a miner during their removal. ROAR required the use of the most advanced Industry 4.0 technologies available, including newly patented optical communication, IoT, cloud computing utilizing gaming software, precision positioning, robotics, and artificial intelligence. The ROAR platform and infrastructure have now been developed and implemented.
This paper discusses the design, building, and implementation of ROAR as the first Industry 4.0 device for mining, the implications of ROAR for mining and agriculture, and future applications. The accomplishment of ROAR has implications for current technical and operational thinking in Industry 4.0, including operational strategies that will use virtual reality to enhance situational awareness in future META-Mining and META-Agriculture implementations, terrestrially and in space.
VUCA (volatility, uncertainty, complexity, and ambiguity) conditions are emerging all over the world and in all sorts of forms. Such conditions have emerged in recent years primarily as a result of anthropogenic climate change, which affects weather conditions with severe hits in the form of hurricanes, typhoons, sudden devastating microclimate events, or similar. Earthquakes and volcanic eruptions also create VUCA conditions. The COVID-19 pandemic has hit global economies in recent years and will certainly divert governments from their regular infrastructure planning and implementation work. Wars, causing disruptions in raw material supply chains, show similar results. Conversely, VUCA conditions sometimes appear due to a sudden vectoral change in a positive direction.
Despite the technological and political progress of humankind in the form of communication and information access (i.e. the internet), sustainable production (industrial, farming), digitalization, civil rights, etc., VUCA conditions prevail and make emergency solutions essential. Emergency solutions tend to be more and more in demand by markets and governments, even coming to be used as mid-term and long-term solutions meeting the needs of consumers.
This paper introduces the vectors leading to emergency conditions and needs in the power generation sphere and provides an insight into medium- to large-size powerships and powertrailers, while addressing major power generation trends such as renewable energy and storage technologies and their recent market gains.
This presentation discusses, for the first time, the nonequilibrium thermodynamics of dynamic chemical equilibrium in a wide range of chemical reactions, including two-step consecutive reactions and multi-step chain reactions. From chemical kinetics we learn that dynamic chemical equilibria become established when (i) there are fast pre-equilibrium steps, or (ii) highly reactive intermediate chemical species are produced during the course of a reaction and the Bodenstein steady-state approximation holds for their concentrations. There result concentration quotients Q as functions of (T, p), which generate stoichiometric equivalences among the chemical potentials of the chemical species involved. In some cases one or more chemical affinities A of the steps involved in the reaction vanish, but this is not true in all cases. Irrespective of whether the A's of the involved steps vanish, one can still use the standard thermodynamic relation between the corresponding dynamic equilibrium constant and the corresponding standard-state chemical affinity, which is the thermodynamic condition of dynamic chemical equilibrium; the corresponding chemical affinities are (i) those of some steps, which assume a zero value, or (ii) when none of the chemical affinities of the steps of the reaction vanish, one or more internal chemical affinities that become equal to zero. In such cases the Q(T, p)'s can equally be calculated using the volume-independent partition functions qk* and the Avogadro number L. A thermodynamic condition of explosion in a chemical reaction is described by the attainment of very large, positive or negative, values of the chemical affinities of the steps involved.
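The "standard thermodynamic relation" invoked above is, in the usual De Donder notation (our transcription, with A the chemical affinity, nu_k the stoichiometric coefficients, Q the concentration quotient, and K the equilibrium constant):

```latex
A = -\sum_k \nu_k \mu_k, \qquad A = A^{\circ} - RT\ln Q, \qquad A^{\circ} = RT\ln K
```

At equilibrium A = 0 and Q = K, recovering the familiar connection between the standard-state affinity and the equilibrium constant.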
The following paper discusses the potential for nuclear power to satisfy the Flogen Sustainability Framework (FSF). The FSF focuses on three main criteria: environmental protection, economic development, and social development. Nuclear power can satisfy all three criteria simultaneously. Environmental protection is achieved by providing carbon-free energy. This is analyzed by using Germany's nuclear phase-out as a case study and comparing the relative differences in cumulative carbon emissions between the years 2000 and 2018. Economic development can be achieved when nuclear is given an even playing field in which fossil fuels are not subsidized. France has deployed an entire fleet of reactors and achieved one of the lowest electricity prices and lowest per-capita carbon footprints among European nations. Furthermore, analyses of the levelized cost of electricity (LCOE) in the United States have shown that nuclear power can be competitive with other alternative energy sources. Thirdly, the aggregated social costs of not maintaining or deploying more nuclear reactors can be enormous. In the aftermath of the Fukushima incident, the deaths caused by rising electricity prices due to imported fossil fuels were higher than the deaths from the disaster itself. Moreover, there are millions of deaths and labor-days lost due to fossil-fuel-induced air pollution every year. This puts a heavy cost on productivity, as well as on the healthcare systems of individual nations. Lastly, the issue of nuclear waste is addressed by exploring possible solutions with current technology through reduction, recycling, and storage.
Most countries depend on oil. States will go to great lengths to acquire an oil production capability or to be assured access to the free flow of oil. History has provided several examples in which states were willing to go to war to obtain oil resources or in defense of an oil producing region. States have even become involved in conflicts over areas which may only possibly contain oil resources. This trend is likely to continue in the future until a more economical resource is discovered or until the world's oil wells run dry. One problem associated with this dependence on oil is the extremely damaging effects that production, distribution, and use have on the environment. Furthermore, accidents and conflict can disrupt production or the actual oil resource, which can also result in environmental devastation. One potential solution to this problem is to devise a more environmentally-safe resource to fuel the economies of the world.
The main purpose of a primary cementing job is to provide effective zonal isolation for the life of the well, so that oil and gas can be produced safely and economically. Oil well cement, as the name suggests, is used for the grouting, also known as cementing, of oil wells, both offshore and onshore. It is manufactured from Portland cement clinker and from hydraulically blended cements. Oil well cement can resist high pressures as well as very high temperatures, and it sets very slowly because it contains organic retarders that prevent it from setting too fast. Oil well cement has proved very beneficial to the petroleum industry owing to these characteristics, for it is the cement that keeps oil wells functioning properly. The raw materials required for the production of oil well cement are: 1- limestone; 2- iron ore; 3- coke; 4- iron scrap.
Cement is also used to seal formations to prevent loss of drilling fluid and for operations ranging from setting kick-off plugs to plugging and abandonment. One of the best-known workover jobs on an oil well, undertaken to prevent excess gas and water production and to plug their passages, is oil well cementing. The important factors to consider in an oil well cementing job are: rheological and physical properties such as density, fluid loss, thickening time, and water-cement ratio under high pressure and temperature; the effect of accelerators and retarders on cement slurries; the compressive strength and permeability of cement plugs; and additives for special applications such as elevated temperatures and a high influx of electrolytes. This paper presents the results of our studies on these subjects in oil well cementing.
The requirements on power generation with biogas, gas, and diesel engines are rising. Ever more flexibility at maximum operational reliability and a long lifetime are required of them at the same time, so the requirements for the oil and for oil condition monitoring grow correspondingly. This presentation provides information about an online oil condition monitoring system that addresses these priorities. The focus is on the detection of contamination effects, in contrast to oil changes in gearboxes, where additive degradation is the dominating effect.
The online oil sensor system measures the conductivity, the relative permittivity, and the temperature independently of each other. Based on a very sensitive measurement method with high accuracy, even small changes in the conductivity and dielectric constant of the oil composition can be detected reliably. The sensor system effectively monitors the proper operating conditions of the engines and gearboxes and instantaneously signals any kind of abnormal parameter change.
The system enables damage prevention for the engine through early warning of critical operating conditions and an extended oil-change interval, realized by precise measurement of the electrical conductivity, the relative permittivity, and the oil temperature. The WearSens® Index (WSi), which has been successfully implemented in wind power gearbox applications, is quite flexible and can be adapted to engine monitoring as well. The mathematical model of the WSi combines all measured values and their gradients into one single parameter for comprehensive monitoring, to protect the asset from expensive damage. Furthermore, the WSi enables a long-term prognosis of the next oil change through 24/7 server data logging. Corrective procedures and/or maintenance can be carried out before actual damage occurs. Raw data and WSi results from a landfill gas engine installation are shown. Short-term and long-term analyses of the data show significant trends and events, which are discussed in more detail.
24/7 monitoring of the system during operation enables specific preventive and condition-based maintenance, independent of rigid inspection intervals.
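As a purely illustrative sketch of how a combined condition index of this kind can be formed (the actual WearSens® WSi model is proprietary; the weighting scheme, baselines, and numbers below are assumptions, not the vendor's algorithm), one can fuse normalized deviations of the measured values from their healthy baselines together with their gradients into a single dimensionless number:

```python
def combined_index(conductivity, permittivity, d_conductivity, d_permittivity,
                   baseline_cond, baseline_perm, weights=(0.4, 0.2, 0.3, 0.1)):
    """Fuse measured values and their gradients into one dimensionless
    condition index (illustrative only, not the actual WSi model)."""
    # Relative deviation of each measured value from its healthy baseline
    dev_cond = abs(conductivity - baseline_cond) / baseline_cond
    dev_perm = abs(permittivity - baseline_perm) / baseline_perm
    # Gradients (rate of change) enter directly; fast drift raises the index
    terms = (dev_cond, dev_perm, abs(d_conductivity), abs(d_permittivity))
    return sum(w * t for w, t in zip(weights, terms))

# Example: conductivity drifted from 100 to 120 pS/m, permittivity from 2.20 to 2.31
index = combined_index(120.0, 2.31, 0.1, 0.02,
                       baseline_cond=100.0, baseline_perm=2.2)
```

A rising index then triggers the advance warning; the alarm threshold would be tuned per installation.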
A new oil sensor system is presented for the continuous, online measurement of wear in industrial gears, turbines, generators, transformers, and hydraulic systems. Changes are detected much earlier than with existing technologies such as particle counting or vibration measurement. Thus, targeted corrective procedures and/or maintenance can be carried out before actual damage occurs. Efficient machine utilization, accurately timed preventive maintenance, a reduction of downtime, and an increased service life can all be achieved.
The oil sensor system measures the components of the complex impedance X of the oil, in particular the electrical conductivity, the relative dielectric constant, and the oil temperature.
Inorganic compounds arise at contact surfaces from the wear of parts, broken oil molecules, acids, or oil soaps. These all lead to an increase in the electrical conductivity, which correlates directly with the wear. In oils containing additives, changes in the dielectric constant indicate the chemical breakdown of additives. A reduction in the lubricating ability of the oil, the detection of impurities, the continuous evaluation of the wear of bearings and gears, and the assessment of oil aging together follow the holistic approach of real-time monitoring of changes in the oil-machine system. Through long-term monitoring and continuous analysis of the oil quality, it is possible to identify the optimal time for the next oil change – condition based.
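To make the condition-based oil-change idea concrete, one simple approach (a sketch, not the vendor's actual method; the limit value and data below are made-up illustrations) is to fit a linear trend to recent conductivity readings and extrapolate to the point where a condition limit would be crossed:

```python
def hours_to_condition_limit(times_h, values, limit):
    """Least-squares linear trend through recent readings, extrapolated to
    the condition limit; returns estimated remaining hours (simple sketch)."""
    n = len(times_h)
    mean_t = sum(times_h) / n
    mean_v = sum(values) / n
    # Ordinary least-squares slope and intercept
    sxy = sum((t - mean_t) * (v - mean_v) for t, v in zip(times_h, values))
    sxx = sum((t - mean_t) ** 2 for t in times_h)
    slope = sxy / sxx
    intercept = mean_v - slope * mean_t
    if slope <= 0:
        return float("inf")  # no upward trend, no oil change predicted
    return max(0.0, (limit - intercept) / slope - times_h[-1])

# Example: conductivity rising 10 units per 100 h, limit at 200
remaining = hours_to_condition_limit([0, 100, 200, 300],
                                     [100, 110, 120, 130], limit=200)
```

In practice the fit window, the limit, and robustness against outliers would all be tuned to the specific oil and machine.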
The protection of pipelines from external corrosion is commonly accomplished by combining pipeline coatings with cathodic protection, which protects those portions of the pipeline that are inadequately coated or where the coating contains defects. Defects in pipeline coatings that expose bare steel are termed holidays. Conventional anode resistance formulas that ignore the current and potential distribution on the pipes are inadequate for modelling pipelines with holidays. Current and potential distribution must also be considered when modelling multiple pipelines. Factors such as variations in coating quality and stray-current interference affect the quality of the cathodic protection system. Another major factor in the design and maintenance of underground infrastructure (e.g. pipelines, storage tanks, tunnels, etc.) is electrical interference (electrical pollution) from power lines, railways, and other electrical sources. Traditional resistance formulas are inadequate for modelling these complex interactions. Pipes with coated surfaces can be modelled in several ways; in this analysis the coating was assumed to be a highly resistive barrier.
Predictive maintenance evaluates the condition of equipment by performing periodic (offline) or, preferably, continuous (online) equipment condition monitoring. The ultimate goal of the approach is to perform maintenance at a scheduled point in time when the maintenance activity is most cost-effective and before the equipment's performance drops below a threshold. This reduces unplanned downtime costs due to failure, which, depending on the industry, can run to hundreds of thousands per day. In energy production, in addition to lost revenue and component costs, fines can be levied for non-delivery, increasing costs even further. This is in contrast to time- and/or operation-count-based maintenance, where a piece of equipment gets maintained whether it needs it or not. Time-based maintenance is labor intensive, ineffective in identifying problems that develop between scheduled inspections, and therefore not cost-effective. The fundamental idea is to transform the traditional 'fail and fix' maintenance practice into a 'predict and prevent' approach.
The "predictive" component of predictive maintenance stems from the goal of predicting the future trend of the equipment's condition. This approach uses principles of statistical process control to determine at what point in the future maintenance activities will be appropriate.
Most predictive inspections are performed while equipment is in service, thereby minimizing disruption of normal system operations. Adoption of predictive maintenance can result in substantial cost savings and higher system reliability.
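The statistical-process-control idea mentioned above can be sketched as a Shewhart-style check: readings from a healthy baseline period define control limits, and later readings outside mean ± k·sigma are flagged for maintenance planning. The 3-sigma width and the data below are illustrative assumptions, not a prescription:

```python
import statistics

def spc_flags(baseline, readings, k=3.0):
    """Flag readings falling outside mean +/- k*sigma of a healthy
    baseline period (basic Shewhart control-chart logic)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [abs(x - mu) > k * sigma for x in readings]

# Example: a stable baseline, then one in-control and two out-of-control readings
flags = spc_flags([10.0, 10.1, 9.9, 10.2, 9.8], [10.1, 10.6, 9.0])
```

Trending how often and how far readings exceed the limits is then what supports predicting when maintenance will become necessary.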
Reliability-centered maintenance emphasizes the use of predictive maintenance techniques in addition to traditional preventive measures. When properly implemented, it provides companies with a tool for achieving lowest asset net present costs for a given level of performance and risk.
One goal is to transfer the predictive maintenance data to a computerized maintenance management system, so that the equipment condition data is routed to the right equipment object to trigger maintenance planning, work order execution, and reporting. In this way, the OPEX and CAPEX savings of a predictive maintenance solution are realized more quickly.
This paper and its attachments provide insight into how products such as those of cmc Instruments GmbH and others help users to achieve their goals in setting up real and beneficial predictive maintenance management systems.
Photoelectrochemical (PEC) water splitting using semiconductor photoelectrodes is one of the most promising and environmentally friendly methods to produce hydrogen from water by utilizing renewable solar energy. Enormous efforts are being devoted to finding adequate semiconductor materials for photoelectrodes. Tungsten oxide (WO3) is one of the most attractive semiconductor materials for PEC water splitting due to its energetically favorable valence band position for water oxidation, its suitable band gap energy (~2.6 eV), which harvests a considerable portion (~12%) of the solar spectrum, and its appreciable photostability in water (pH < 4) [1]. In this study, we developed a facile, economical flame vapor deposition (FVD) process, in which a newly designed double-wire feeder was incorporated into the flame reactor to realize constant feed rates of two solid precursors. Vertically aligned nanowire-based sub-stoichiometric tungsten oxide thin films with controllable thickness were prepared at fast growth rates of up to a few hundred nanometers per minute and could be converted to photoactive monoclinic WO3 by post-annealing. The growth of branched nanotrees (NTs) was realized in an FVD system incorporating double wire feeders. Heterogeneous doping of nanowire and nanotree structures of WO3 was also achieved with this FVD system [2-5]. The PEC measurements with the prepared composite photocatalyst working electrode were carried out using a custom-built three-electrode photoelectrochemical cell with a quartz window, equipped with a saturated calomel reference electrode and a platinum mesh counter electrode [3]. The nanostructured composite thin film prepared by FVD with the double-wire feeder in this study showed better performance for PEC water splitting than results recently reported in the literature.
References:
[1] J.-R. Ding and K.-S. Kim, AIChE J., 62 (2016) 421-428.
[2] J.-R. Ding and K.-S. Kim, Chemical Engineering J., 300 (2016) 47-53.
[3] J.-R. Ding and K.-S. Kim, Chemical Engineering J., 334 (2018) 1650-1656.
[4] J.-R. Ding, S.-H. Yoon, W. Shi and K.-S. Kim, AIChE J., 65 (4) (2019) 1138-1143.
[5] S.-H. Yoon and K.-S. Kim, J. Ind. Eng. Chem., 73 (2019) 52-57.
The electricity markets in Europe were designed with the aim of lowering energy supply costs and thus increasing the competitiveness of European industry. The large electricity markets in the USA likewise operate to reduce overall costs in a given network area. In a market dominated by fossil-fuel power plants, such a design works. The classic merit order runs from nuclear generation plants (marginal costs almost zero) via coal- and gas-fired assets up to oil- or gas-fired peaking plants (low efficiency, high marginal costs) for the high-demand hours. Plants with very low marginal costs and little flexibility will bid at (close to) zero; they accept the price level set by others. Higher-cost (flexible) plants will bid more in line with opportunity costs and try to optimize income over fewer load hours.
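The merit-order mechanism described above can be sketched as a simple clearing computation: bids are stacked by ascending price, and the marginal plant needed to cover demand sets the price that all accepted plants receive. The prices and capacities below are made-up illustrations:

```python
def clearing_price(bids, demand_mw):
    """bids: list of (price_eur_per_mwh, capacity_mw) offers.
    Stack bids in merit order (ascending price) and return the price of
    the marginal plant once cumulative capacity covers demand."""
    served = 0.0
    for price, capacity in sorted(bids):
        served += capacity
        if served >= demand_mw:
            return price
    raise ValueError("demand exceeds total offered capacity")

# Example stack: nuclear near zero, coal/gas mid-range, peakers high
bids = [(0.0, 20.0), (5.0, 30.0), (40.0, 25.0), (80.0, 10.0)]
```

With this stack, a demand of 45 MW clears at the gas price, while a demand of 60 MW pulls the more expensive mid-merit plant into the money and raises the clearing price for everyone.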
Network capacity could keep pace with the growth of demand and generation because this growth was predictable and stable. Planning and building a large generation asset can take a decade. Today, the share of renewable generation in the generation mix is increasingly being felt. There are days in spring and summer on which Germany is completely supplied by solar and wind. This rapid change in the generation mix puts into question whether the current market design is still adequate.
The current electricity market design is built on a few assumptions:
1. that the owners of a power plant do have marginal costs,
2. that the owners of a power plant can decide to run the plant or not,
3. that location (in the power network) is not very relevant.
These effects are strengthened by the fact that in large areas power demand is not growing, and the trend toward energy efficiency will not be reversed. As a result of the above, (term) wholesale power prices have become low and show little volatility. Price movements are limited because periods of oversupply of renewables are followed by periods of oversupply of fossil-fueled power. The 'lubricant' of a market, price volatility, has disappeared from the longer-term markets and remains only in the very short-term markets. The other 'new' phenomenon is a shortage of network capacity. The planning and construction of power networks takes much longer than the planning and construction of (especially) solar PV assets. Also, the unplanned production pattern of renewable assets raises the question whether enough network capacity has to be available at all times for every kWh produced.
Although the variable costs of solar and wind may be low, these sources are by no means free. The lack of meaningful market prices is not only a problem for fossil power plants; it is definitely also an issue for solar and wind. We need new signals to drive investment and dispatch. Because CAPEX is so dominant in renewable generation, the risk is very 'front-loaded'. No investor will put money into a market where (marginal) prices are close to zero. In a CAPEX game, the investment risk has to be allocated up front. It is hard to see how generators will take this risk on the basis of a market in which marginal system costs are decreasing. Long-term fixed pricing for the generation capacity will have to be agreed before the investment is made.
What then has to be steered (through price signals) is the capacity to balance supply and demand. In other words, capacity to transport, store and deliver is much more important than the energy. Capacity is the real scarce resource. A market for capacity will have to be developed. What is meant here is not the capacity market advocated by power producers. It is not about back-up. What is meant here is the long term and short term pricing of capacity on the grid. This makes transparent where there is a scarcity of capacity and what is the value of linking supply to demand. If we can put such a system in place then the rational solution can be sought by all market players. The article elaborates how this can be done.
Sunlight falls on the earth inexhaustibly. In order to realize a sustainable society, it is necessary to utilize this energy effectively. However, the energy of sunlight is difficult to use because of its low coherence. A solar-pumped laser, which directly converts sunlight into laser light, was first realized in 1965 [1]. In 2012, the authors succeeded in developing a 120 W solar-pumped laser system using a large 4 m2 Fresnel lens as the primary focusing concentrator [2]. However, the primary concentrator was so large that it was difficult to fabricate with high accuracy and transparency. As a result, a solar light collection efficiency of only 42% was obtained [3].
In this study, a solar concentrating system using a Fresnel lens and a flat mirror of the 1 m2 class was developed. As a result, a solar light collection efficiency of 58.5% was realized. Furthermore, the system tracks the sun by controlling its yaw and pitch angles while keeping the laser head horizontal.
The sunlight collected by the Fresnel lens is re-focused into the laser medium by the solar cavity. In this study, the solar cavity was manufactured from ABS resin on a 3D printer. Using a 3D printer accelerates the optimization of the solar cavity, because the required time is much shorter than with other methods such as milling and drilling metal materials. We realized a 2.43 W laser output using the 3D-printed ABS solar cavity. However, lasing could be sustained for only 18 seconds, probably because the ABS cavity deforms as part of the collected solar power is absorbed by it.
In the future, we will manufacture a solar cavity that does not deform due to the heat generated by light collection, aiming for stable lasing.
A solid oxide electrolyzer cell (SOEC) is a solid oxide fuel cell run in regenerative mode to achieve the electrolysis of water, using a solid oxide (ceramic) electrolyte to produce hydrogen gas. The production of pure hydrogen is compelling because it is a clean fuel that can be stored, making it a potential alternative to batteries, methane, and other energy sources. Electrolysis is currently the most promising method of hydrogen production from water, owing to its high conversion efficiency and relatively low required energy input compared with thermochemical and photocatalytic methods. The levelized cost of hydrogen (LCOH) is lowest with SOEC. Carrier and storage forms of hydrogen, such as green ammonia, medium-pressure hydrogen gas, high-pressure hydrogen gas, and liquefied hydrogen systems, are easy to adopt.
This paper will highlight the basics of electrolyzer cell technology and explain the use of market-ready technologies.
Fuel cells are seen as the efficiency technology of the future. The idea behind them is more than 180 years old: hydrogen plus oxygen generates electricity and heat. What now enables particularly efficient powering and heating was used in space many years ago, and it is now also applied in power generation, cogeneration, tri-generation and quad-generation (plants with CO2 separation for the beverage industry), transportation (cars, trucks, and trains), and aviation.
This paper will highlight the basics of fuel cell technology and explain the use of market-ready technologies.
In view of the prevailing skepticism about the soundness of the thermodynamic basis of the expression for the rate constant given by the traditional transition state theory (TST) of bimolecular reactions, its foundational ingredients are revisited in this paper. The earlier inference of the existence of a quasiequilibrium between the reactants and the activated complexes has been properly amended. The need for this is elucidated by showing that invoking quasiequilibrium amounts to using it as a pre-equilibrium step, which implies, by the basic principles of chemical kinetics, that the conversion of activated complexes to product molecules must be the slow step. However, it has been demonstrated that the lifetime of an activated complex is less than the time required to complete half a molecular vibration of the activated complex, which means activated complexes are highly reactive species. Therefore, this is not a case of pre-equilibrium but, indeed, a case of a steady state for the forward-moving activated complexes, as Arnot had advocated earlier. We further demonstrate that this steady state for the concentration of the forward-moving activated complexes is a case of dynamic chemical equilibrium between the reactants and the forward-moving activated complexes, whose sound thermodynamic basis is elucidated by describing the corresponding nonequilibrium thermodynamics. Thus, the much-needed description of the thermodynamic basis of the TST expression for the rate constant has been accomplished.
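For reference, the conventional TST rate-constant expression whose thermodynamic basis the paper revisits is the standard Eyring form, in which the quasiequilibrium constant is the quantity whose interpretation is amended (standard textbook notation):

```latex
k \;=\; \kappa\,\frac{k_B T}{h}\,K^{\ddagger}
  \;=\; \kappa\,\frac{k_B T}{h}\,
        \exp\!\left(-\frac{\Delta G^{\ddagger}}{RT}\right)
```

Here $\kappa$ is the transmission coefficient, $k_B T/h$ the universal frequency factor, $K^{\ddagger}$ the reactant-activated-complex (quasi)equilibrium constant, and $\Delta G^{\ddagger}$ the standard Gibbs energy of activation.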
The ongoing accumulation of greenhouse gas (GHG) concentrations in the atmosphere from various anthropogenic sources is believed to be the primary cause of the increase in the earth's surface temperature. CO2 is the most significant GHG, and the top anthropogenic sources of CO2 emissions are electricity generation and stationary industry sectors powered by fossil fuels. Among other technologies, carbon capture, utilization, and storage (CCUS) is expected to play a key role in addressing the GHG emission challenge. Since the early days of the oil and gas industry, CO2 injection into oilfields has been recognized as an effective method of enhanced oil recovery (EOR). However, worldwide CO2-EOR implementations remain modest. In this talk, the 50-year history of CO2-EOR is reviewed, and the key attributes of successes and failures are highlighted. The recovery benefits and the challenges of CO2-EOR in relation to CO2 capture, transportation, and oil displacement in the subsurface are discussed. We then show, for the first time, a comprehensive map of the current CO2 emissions from stationary industrial sources in Saudi Arabia. We discuss the potential of CO2-EOR in Saudi Arabia and provide an estimate of CO2 storage capacity in depleted hydrocarbon fields as well as other geological formations, including deep aquifers and CO2 mineralization in basalts. We close with some thoughts on the role of the oil and gas industry in Saudi Arabia in capitalizing on this opportunity by promoting CCUS as a win-win technology.
Hydrogen fuel cells are among the most promising devices for powering vehicles and are expected to soon become a major technology helping to reduce greenhouse gas emissions. In order to supply hydrogen to vehicles, a robust process-oriented analytical solution is required to properly certify its quality. ASDevices has spent the last 4 years developing a complete portfolio of technologies for this very application.
Over the past years, ASDevices has established itself as an expert in process-oriented ultra-trace analysis of permanent gases and NMHC in various matrices, including hydrogen. For sulfur analysis, detectors such as Sulfur Chemiluminescence Detectors (SCD) or Pulsed Flame Photometric Detectors (PFPD) have traditionally been used. However, owing to their complexity, maintenance requirements, and use of hazardous gases such as ozone, these instruments are not suited to continuous process monitoring and are limited to laboratory testing. More recently, ASDevices developed a robust method for the measurement of sulfur compounds using the Epd technology in combination with the iMov GC platform, without the need for a sample concentrator.
Thanks to the high sensitivity of the Epd technology, sub-ppb detection limits have been achieved for permanent gases, NMHC, and sulfur compounds. An explosion-proof version of the iMov GC platform was therefore designed and configured to combine these analyses, making it the first process-oriented instrument combining the analysis of permanent gases, NMHC, and sulfur compounds in fuel-grade hydrogen.
The presentation will cover the various technologies and the results obtained from the ASDevices solution installed at the Sinopec Yanshan plant, which will generate H2 for the 2022 Winter Olympics.
The global refrigeration industry (cold chain) encompasses a wide range of disciplines, including the food sector, where temperature-controlled warehouses, trucks and shipping containers maintain food safety, and the healthcare industry, where refrigeration preserves medicines and pharmaceuticals, including vaccines. It is estimated that the refrigeration sector consumes approximately seventeen percent of the global electricity production [1] and this is expected to grow in the coming years due to global warming.
Significant performance enhancements, reduction in energy consumption and greenhouse gas emission, and improved maintenance intervals can be achieved by using physics-based thermodynamic modeling methods [2-5] to develop a digital twin for a range of industrial refrigeration systems. Implementations have been demonstrated for stand-alone, single-loop commercial vapor compression refrigeration systems (refrigerators or commercial cooling units) and for multi-loop, multi-compressor industrial refrigeration systems used in temperature-controlled warehouses up to several hundred thousand square feet in size. Such digital twins enable real-time performance monitoring by computing mass- and energy-balances using measured data, and the calculated results can be trended and used by machine learning algorithms to identify common equipment failures and alert personnel to operational problems.
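As a minimal illustration of the kind of mass- and energy-balance computation such a digital twin performs (assuming refrigerant enthalpies are obtained from property tables and mass flow from measured data; the functions, names, and numbers below are illustrative, not the cited modeling method itself):

```python
def evaporator_capacity_kw(m_dot_kg_s, h_out_kj_kg, h_in_kj_kg):
    """First-law energy balance on the evaporator:
    Q_evap = m_dot * (h_out - h_in), with enthalpies in kJ/kg."""
    return m_dot_kg_s * (h_out_kj_kg - h_in_kj_kg)

def cop(q_evap_kw, w_comp_kw):
    """Coefficient of performance, a key quantity trended over time to
    reveal capacity loss or rising compressor power."""
    return q_evap_kw / w_comp_kw

# Example: 0.5 kg/s of refrigerant evaporating from 250 to 400 kJ/kg,
# with 25 kW of measured compressor power
q = evaporator_capacity_kw(0.5, 400.0, 250.0)
performance = cop(q, 25.0)
```

Trending such computed quantities against their commissioning baselines is what lets the machine-learning layer separate, for example, a fouled condenser from a refrigerant undercharge.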
Examples are presented illustrating how the trended calculated results enable root-cause identification of a 40+% cooling capacity reduction, and a machine learning algorithm is presented demonstrating highly (98+%) accurate identification of the eight most common refrigeration system failure modes.