Atomic and molecular input data for plasma modelling : a user's perspective

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding), Academic



With the advent of cheap yet powerful computers, self-consistent numerical simulation has become a viable tool for understanding, designing and improving technological and scientific plasma sources. Nowadays, multi-dimensional models capable of simulating time-dependent discharge behaviour are in use at various universities and research institutes. One such computer code is Plasimo, a PLAsma SImulation MOdel being developed at Eindhoven University of Technology [1]. Plasimo provides kinetic (Monte Carlo), hybrid and fluid models for transport-sensitive and equilibrium plasmas. Codes like Plasimo obviously require a multitude of input data to function properly, but the measurement or calculation of such data is mostly outside the project's reach. In this contribution we, as Plasimo developers, will therefore provide a user's perspective on the subject of atomic and plasma data.

In the first part of this contribution, we will provide an overview of the various sorts of input data that are needed for the types of plasma modelling supported by Plasimo. We will consider the differential and total cross sections and emission coefficients needed in kinetic studies, but also higher-level quantities such as collision integrals, Gaunt factors and net collisional-radiative coefficients, which are typically used in Maxwellian plasma models and in models for plasmas under Local Thermal Equilibrium conditions. We will demonstrate how Plasimo manages this myriad of concepts and deals with situations where input data are simply not available. The discussion will be guided by real-world examples of models for low- and high-pressure plasma sources.

In the second part of the contribution, we will discuss how modern Internet technologies can help us to fulfil our input data needs. As of today, input data are typically either hard-coded in computer programs or read from local input files. Moreover, data pre-processing tasks, such as integrating cross sections to rate coefficients, are usually carried out locally as well. We will demonstrate how Web Services [2] can be used to manage, disseminate and manipulate data sets more conveniently. We will also identify various input data pre-processing tasks that could be taken over by data distributors, suggest how this could be implemented, and sketch the work flow that would result from such an effort.

[1] see:
[2] see:
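One pre-processing task mentioned above, integrating a cross section over a Maxwellian electron energy distribution to obtain a rate coefficient, can be sketched as follows. This is a minimal illustration under the assumption of a Maxwellian electron energy distribution function, not Plasimo's actual implementation; the function name and the tabulated cross section are hypothetical.

```python
import numpy as np

EV = 1.602176634e-19       # 1 eV in J
ME = 9.1093837015e-31      # electron mass in kg

def rate_coefficient(E_eV, sigma_m2, Te_eV):
    """Rate coefficient k(Te) = integral of sigma(E) * v(E) * f(E) dE,
    with f(E) the Maxwellian electron energy distribution (normalised
    to unity) and v(E) = sqrt(2 E / m_e) the electron speed.

    E_eV:     energy grid in eV (ascending)
    sigma_m2: cross section on that grid, in m^2
    Te_eV:    electron temperature in eV
    Returns k in m^3/s.
    """
    E = np.asarray(E_eV) * EV            # energies in J
    Te = Te_eV * EV                      # temperature in J
    v = np.sqrt(2.0 * E / ME)            # electron speed [m/s]
    f = 2.0 * np.sqrt(E / np.pi) * Te**-1.5 * np.exp(-E / Te)
    y = sigma_m2 * v * f
    # plain trapezoidal rule on the (possibly non-uniform) energy grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E)))
```

For a constant cross section the integral reduces to sigma times the Maxwellian mean speed sqrt(8 k Te / (pi m_e)), which provides a simple consistency check for the numerics.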
Original language: English
Title of host publication: Presentation at the VAMDC annual meeting, 21-24 February 2012, Vienna, Austria
Publication status: Published - 2012
Event: VAMDC Annual Meeting, Vienna, Austria
Duration: 21 Feb 2012 - 24 Feb 2012



