Dependability is defined by Avizienis, Laprie and Randell as "the ability of a system to deliver service that can justifiably be trusted". The service delivered by a system is its behavior as perceived by another system (human or physical) interacting with it. A service can deviate from its desired functionality; the occurrence of such an event is termed a failure. An error is the part of the system state that may cause a failure, and a fault is the determined or hypothesized cause of an error. A fault is active when it produces an error and dormant otherwise. A system may fail according to several failure modes. A failure mode characterizes a deviation of the delivered service from its desired functionality along three parameters: the failure domain (value domain or time domain), the perception of the failure by the users of the system (consistent or inconsistent), and the consequences of the failure (from insignificant to catastrophic).
Dependability is, in fact, a concept that covers several attributes. From a quality point of view, reliability (the continuity of correct service) and availability (the readiness for correct service) are important characteristics for any system.
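These two attributes have standard quantitative counterparts. As an illustrative sketch (not part of the cited definitions): under the common assumption of a constant failure rate, reliability is R(t) = exp(-λt), and steady-state availability is A = MTTF / (MTTF + MTTR), where MTTF and MTTR are the mean times to failure and to repair.

```python
import math

def reliability(failure_rate: float, t: float) -> float:
    """R(t) = exp(-lambda * t): probability of continuous correct
    service up to time t, assuming a constant failure rate
    (exponential lifetime model -- an assumption, not a general law)."""
    return math.exp(-failure_rate * t)

def steady_state_availability(mttf: float, mttr: float) -> float:
    """A = MTTF / (MTTF + MTTR): long-run fraction of time the
    system is ready to deliver correct service."""
    return mttf / (mttf + mttr)

# Hypothetical numbers: MTTF of 1000 h, MTTR of 2 h
print(steady_state_availability(1000.0, 2.0))  # ~0.998
print(reliability(1.0 / 1000.0, 100.0))        # ~0.905
```

The example shows why the two attributes are distinct: a system that fails often but is repaired quickly can have high availability yet poor reliability over any given mission time.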
Safety is the reliability of the system with respect to critical failure modes, i.e. failure modes leading to catastrophic, severe, or major consequences. This attribute characterizes the ability of a system to avoid the occurrence of catastrophic events that may be very costly in terms of monetary loss and human suffering. One way to reach the safety objective is, first, to apply a safe development process in order to prevent and remove design faults. This method has to be complemented, at the design step, with an evaluation of the system's behavior. This can be achieved through qualitative analysis (identification of failure modes, component failures, and conditions leading to a system failure through formal modeling and analysis) and quantitative analysis (probability evaluation applied to selected parameters for the analysis of dependability properties). A further means for reaching dependability is to apply a fault-tolerant approach. An important quantitative analysis issue in designing fault-tolerant systems is how to balance the amounts of failure detection, recovery, and masking redundancy used in the system, in order to obtain the best possible overall cost/performance/dependability trade-off.
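The trade-off around masking redundancy can be illustrated with a classic textbook case (a sketch chosen for illustration, not drawn from the references above): triple modular redundancy (TMR) with a perfect majority voter, where the system delivers correct service as long as at least two of three replicas work.

```python
def series_reliability(*component_rs: float) -> float:
    """Non-redundant composition: every component must work,
    so the system reliability is the product of component reliabilities."""
    product = 1.0
    for r in component_rs:
        product *= r
    return product

def tmr_reliability(r: float) -> float:
    """Triple modular redundancy with a perfect majority voter
    (an idealizing assumption): the system works if at least
    2 of 3 independent replicas work.
    R_TMR = 3*r^2*(1-r) + r^3 = 3*r^2 - 2*r^3."""
    return 3 * r**2 - 2 * r**3

# Masking redundancy pays off only for sufficiently reliable replicas:
print(tmr_reliability(0.9))  # 0.972 -- better than a single 0.9 replica
print(tmr_reliability(0.4))  # 0.352 -- worse than a single 0.4 replica
```

This captures the balancing issue in miniature: tripling the hardware cost improves dependability only above the break-even replica reliability of 0.5, which is one reason quantitative evaluation belongs in the design step rather than after deployment.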
Dependability and Security have a lot in common. Security refers to how robust a system is with respect to a particular security policy, while dependability refers to how robust a system is with respect to some fault, and the definition of a fault is analogous to that of a security policy. It is widely accepted today that security is a subset of dependability: if there is a way to violate a security policy, then there is a fault. The security policy always seems to be part of the particular definition of "robust" that is applied to a particular system.
References related to the generic concepts of Dependability, Safety and Security
 A. Avizienis, J. Laprie, and B. Randell. Fundamental concepts of dependability. In Proceedings of the 3rd Information Survivability Workshop, Boston, USA, 2000, pp. 7-12
 A. Avizienis, J. Laprie, B. Randell, and C. Landwehr. Basic concepts and taxonomy of dependable and secure computing. IEEE Transactions on Dependable and Secure Computing, 1: 11-33, 2004
 ARTIST, Project IST-2001-34820. Selected topics in embedded systems design: roadmaps for research, May 2004
 Related articles authored by P. Katsaros: A, B, C, D, E and others
 F. Cristian. Understanding fault-tolerant distributed systems. Communications of the ACM, 34/2: 56-78, 1991
 Related articles authored by P. Katsaros: F, G, H, I, J, K, L and others
 C. Meadows. Applying the dependability paradigm to computer security. In Proceedings of the Workshop of New Security Paradigms, La Jolla, USA, 1995, pp. 75-79
 C. Meadows. Applying the dependability paradigm to computer security: then and now. In Proceedings of the Workshop on Principles of Dependable Systems, Dependable Systems and Networks 2003, San Francisco, USA, June 2003
 P. Verissimo. Dependability, Security, two faces of the same coin? Invited talk, in Proceedings of the Workshop on Principles of Dependable Systems, Dependable Systems and Networks 2003, San Francisco, USA, June 2003
 J. Viega and G. McGraw. Building secure software. Addison-Wesley Professional Computing Series, 2002 (p. 15)
 R. Anderson. Security Engineering: A guide to building dependable distributed systems. Wiley, 2001
 Related articles authored by P. Katsaros: M, N, O and others
Important Annual Conferences
Computer Safety, Reliability and Security (SAFECOMP)
Component-based software engineering (CBSE) is the discipline of developing software components and of building systems that incorporate such components. Component-based systems are built by assembling components developed independently of the systems. The general-purpose component technologies currently available (CORBA CCM, JavaBeans, DCOM, .NET) cannot cope with the nonfunctional requirements of such systems (reliability, availability, safety, security etc.). These additional requirements call for new technologies and new methods of software modeling and software verification.
Important Annual Conferences
Component-Based Software Engineering (CBSE)