LearningTree

Quality & Evaluation of Software Architecture

Quality modeling, architecture evaluation, metrics & ATAM.

01
Chapter One

Introduction to Software Quality and Quality Modeling

Why Quality is So Important

Software architecture and quality are correlated. High-quality architecture leads to reliable, scalable, and maintainable systems, while neglecting quality results in costly failures and unsatisfied users.

✅ Benefits of High Quality

  • High Customer Satisfaction - higher user retention, more customers
  • Reduced Maintenance Costs - the system is more scalable, reliable, performant
  • Competitive Advantage - easier to stand out in a competitive market

⚠️ Drawbacks of Poor Quality

  • Low Customer Satisfaction - frequent crashes, slow performance → customers switch
  • Increased Maintenance Costs - poor performance, low scalability, production issues → lower profit margins
Architecture & Quality Correlation: Architecture Erosion Chain

When technical debt accumulates and architectural refactoring is neglected, systems undergo architecture erosion: a structural degradation that progressively worsens software quality.

⚠ Technical Debt (lack of time & resources, no architectural refactoring)
▶ Architecture Erosion (structural degradation over time, growing structural problems)
▶ ↓ Understandability · ↑ Cost of changes · ↓ Quality goals met (accumulated consequences compound)
▶ Poor Software Quality
Quality Definition
"Quality is the correspondence between the observed properties and the previously defined requirements of an observation unit."
Definition of quality based on IEC 2371
Quality Goals (requirements) → Software System (observation unit) → Measured Quality Attributes (observed properties)
Quality Models

A quality model is a framework that helps define and categorize the attributes important for assessing software quality. It describes quality using criteria and specifies metrics to measure them.

1977 · US Air Force - Model of McCall: three types of quality attributes (Product revision · Product operations · Product transition)
1978 · Barry W. Boehm - Model of Boehm: similar to McCall, with a more detailed hierarchy (As-is utility · Maintainability · Portability)
1985 · HP - FURPS: five quality attribute categories (Functionality · Usability · Reliability · Performance · Supportability)
International Standard - ISO/IEC 25010: replaces ISO/IEC 9126 (2005); guideline for system & software quality models. The current standard.
Quality Standard β€” ISO/IEC 25010

"… degree to which a software product satisfies stated and implied needs when used under specified conditions." [ISO 25010:2011]

Aggregates attributes of a software product as a hierarchy of "quality characteristics" and "sub-characteristics" that relate to their suitability to fulfill defined or required needs.

Internal Quality (developers & architects) ⟹ External Quality (IT operations, users) ⟹ Quality in Use (user in a specific context)
Quality Attributes - ISO/IEC 25010
  • Functional Suitability - functional completeness, functional correctness, functional appropriateness
  • Performance Efficiency - time behavior, resource utilization, capacity
  • Usability - learnability, operability, user error protection, accessibility
  • Reliability - maturity, availability, fault tolerance, recoverability
  • Security - confidentiality, integrity, non-repudiation, accountability, authenticity
  • Maintainability ★ - modularity, reusability, analyzability, modifiability, testability
  • Portability - adaptability, installability, replaceability
  • Compatibility - co-existence, interoperability
Quality Tree

Quality goals are structured as a tree, with quality scenarios forming the leaves, each carrying a priority. A quality tree is specific to a particular system, whereas a quality model is generic.

Quality
  • Performance (Time behaviour) - priority low: files are parsed within < 5 seconds
  • Performance (Resource utilization) - priority high: application can be run with 1 GB of RAM
  • Maintainability (Modifiability) - priority med: new rules can be defined in < 4 hours
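To keep such a tree machine-checkable, it can be sketched as a plain data structure; the nested-dict shape and the helper below are illustrative only, not part of any standard:

```python
# Quality tree for the example system above: branches are quality attributes,
# leaves are measurable scenarios with a priority.
quality_tree = {
    "Performance (Time behaviour)": [
        {"scenario": "Files are parsed within < 5 seconds", "priority": "low"},
    ],
    "Performance (Resource utilization)": [
        {"scenario": "Application can be run with 1 GB of RAM", "priority": "high"},
    ],
    "Maintainability (Modifiability)": [
        {"scenario": "New rules can be defined in < 4 hours", "priority": "med"},
    ],
}

def high_priority_scenarios(tree):
    """Collect the leaves that must be evaluated first."""
    return [leaf["scenario"]
            for leaves in tree.values()
            for leaf in leaves
            if leaf["priority"] == "high"]
```

A structure like this lets the team list, filter, and re-prioritize scenarios as the system evolves instead of maintaining the tree only as a diagram.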
Quality Scenarios
  • Provide more detail for specific quality requirements
  • Describe how a system should behave when a specific stimulus occurs
  • Allow to easily measure and determine whether quality requirements are fulfilled
Three Types of Quality Scenarios
1 · Usage Scenario (also: Application scenario)
  • Typical interactions between users and the system
  • Describes efficiency, performance, etc.
"When a user retrieves their order history, the response should be sent within 500 ms."
2 · Growth Scenario (also: Change / Modification scenario)
  • The software's ability to adapt to modification
  • Adding additional functionality
  • How the system scales with demand changes
"When concurrent users double, our system continues operating at the same level of performance."
3 · Exploratory Scenario (also: Boundary / Stress / Failure scenario)
  • How well the system responds to extreme situations
  • Power outages, sudden traffic spikes, etc.
  • Availability and fault tolerance under stress
"When load increases to 10,000 simultaneous users, server response time remains below 2 seconds."

Quality requirements need to be exact and measurable: we cannot evaluate what we cannot measure.

Note: Not all quality requirements are easily measurable
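A measurable scenario like the 500 ms usage example can be checked directly in an automated test. A minimal sketch, where `fetch_order_history` is a hypothetical stub standing in for the real service call:

```python
import time

def fetch_order_history(user_id):
    # Hypothetical stub standing in for the real service call.
    time.sleep(0.01)
    return [{"order": 1, "user": user_id}]

def usage_scenario_met(max_latency_s=0.5):
    """Stimulus: a user retrieves their order history.
    Response measure: the reply arrives within max_latency_s."""
    start = time.perf_counter()
    fetch_order_history(user_id=42)
    return time.perf_counter() - start <= max_latency_s
```

The value of phrasing the scenario as stimulus plus measured response is exactly this: the requirement turns into a pass/fail check instead of a matter of opinion.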
📋 Chapter 1 - Summary
  • Motivation for defining and evaluating the quality of a software system
  • Quality definition: "The correspondence between the observed properties and the previously defined requirements of an observation unit"
  • Quality models (Model of McCall, Model of Boehm, FURPS, ISO/IEC 25010)
  • Break quality goals into quality scenarios and build quality trees.
  • Three types of quality scenarios:
    • Usage Scenario (also called Application scenario)
    • Growth Scenario (also called Change / Modification scenario)
    • Exploratory Scenario (also called Boundary or Stress / Failure scenario)
02
Chapter Two

Introduction to Evaluation of Software Architecture

(Diagram: functional requirements, quality requirements, and constraints flow into the software architecture, which is built from patterns, design principles, cross-cutting concerns, etc.; the open question is the resulting quality.)
Why Evaluate Architecture?
Is the Architecture Good Enough?
  • Identify risks in the architecture
  • Verify achievement of quality goals
  • Verify all stakeholder concerns are met
  • Verify conformance to design decisions
  • Identify critical parts within the system
  • Measure and compare with known metrics

"You cannot control what you cannot measure."

β€” Tom DeMarco
Evaluation Flow
Functional Req. + Quality Req. + Constraints → Architecture → Quality?
Sources of Information for Quality Analysis
📁 Architecture Documentation - UML / architecture diagrams, cloud diagrams, data models, interfaces, cross-cutting concerns
📋 Requirements Documentation - functional & quality requirements, quality scenarios, user surveys, interviews and feedback
💻 Source Code - metrics from static analysis: complexity, dependencies, size of components
🕐 Revision History - rate of changes over time; may uncover incorrect requirements or low-cohesion components
🧪 Test Cases & Results - acceptance testing, penetration tests, etc.
⚡ Runtime Events - metrics, error logs, crashes, user/system errors
Static vs. Dynamic Analysis
📊 Static Analysis
  • Metrics - LOC, coupling, complexity, …
  • Reviews of code & design
  • Audits of code & design
  • Structural: change/bugfix effort per subsystem, errors per component
VS
⚡ Dynamic Analysis
  • Metrics - time & resource usage
  • Tests: performance tests, security tests (fuzzing), usability tests

Static ↔ dynamic interdependency: metrics from static analysis may affect dynamic analysis; e.g., high cyclomatic complexity can indicate potential runtime performance issues. Conversely, metrics from dynamic analysis may influence static analysis; e.g., test coverage metrics may drive changes in code structure, affecting metrics like LOC and coupling.
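As a small taste of static analysis, the sketch below estimates cyclomatic complexity by counting decision points in Python source with the standard-library `ast` module. The selection of node types is a simplification I chose for illustration; real analyzers handle more constructs:

```python
import ast

# Node types that open an extra path through the code (simplified selection).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def estimated_complexity(source):
    """Static analysis sketch: cyclomatic complexity ~ decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1
```

Straight-line code scores 1; every `if`, loop, exception handler, or boolean operator adds a path, matching the intuition that more branches mean more test cases.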

Quantitative vs. Qualitative Analysis
🔢 Quantitative Evaluation

Software → some number

  • Measure & compare with known quantities (LOC, dependencies, complexity, test coverage, …)
  • Problem cases: concepts, structures, decisions, documents
💡 Qualitative Analysis
  • Identification of risks
  • Shows (non-)achievement of quality requirements
📋 Chapter 2 - Summary
  • Motivation + importance of evaluating software architecture to:
    • Identify risks
    • Prevent growing complexity
    • Ensure achievement of quality goals
    • Conformance of implementation to the design decisions
  • Sources of information:
    • Architecture Documentation
    • Requirements Documentation
    • Source Code
    • Revision history
    • Test Cases and Results
    • Runtime events
  • Types of assessment for software architecture:
    • Static vs dynamic
    • Quantitative vs qualitative
03
Chapter Three

Quantitative Evaluation of Software Architecture & Goodhart's Law

Quantitative Evaluation - Software Metrics
  • Metrics - measurable indicators used to assess the characteristics and quality of the software architecture
    • Can be used to measure the system both:
      • Statically; and
      • Dynamically
  • Requirements
    • Rate of change
    • Example: A high rate of change may indicate:
      • We didn't do a good job of identifying and analyzing the requirements.
      • The system is not flexible enough
  • Source Code
    • Size in lines of code (LoC)
    • Complexity (e.g., cyclomatic complexity)
    • Dependencies between building blocks
      • Afferent vs Efferent coupling
    • Cohesion
      • Note: Needs manual assessment
  • Failure
    • Mean Time Between Failures (MTBF)
    • Mean Time To Recovery (MTTR)
    • Uptime/downtime
    • Error Rate

⚠ Note: Watch out for error clusters! Components where many errors have been found probably contain even more.

  • Performance
    • Latency
    • Throughput
    • Utilization
    • Saturation
  • Software Process
    • Number of implemented / tested features over time
    • Meeting time in relation to working time
    • Number of managers, developers, testers
  • Test
    • Number of tests
      • In total
      • Per class or package
      • Per requirement
    • Test coverage (percentage)
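Several of the failure and performance metrics above are simple aggregates over observed data. A sketch using the standard definitions (function names are mine; nearest-rank is one of several percentile conventions):

```python
import math

def mtbf(operating_periods_h):
    """Mean Time Between Failures: average uninterrupted operating time (hours)."""
    return sum(operating_periods_h) / len(operating_periods_h)

def mttr(repair_times_h):
    """Mean Time To Recovery: average time to restore service (hours)."""
    return sum(repair_times_h) / len(repair_times_h)

def availability(mtbf_h, mttr_h):
    """Fraction of time the system is up: MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

def p95_latency(samples_ms):
    """Nearest-rank 95th percentile of observed latencies."""
    ordered = sorted(samples_ms)
    k = math.ceil(0.95 * len(ordered)) - 1
    return ordered[k]
```

For example, an MTBF of 99 hours with an MTTR of 1 hour yields 99% availability; the p95 latency is a more robust load indicator than the average, which a few fast responses can mask.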
Cyclomatic Complexity

Cyclomatic complexity measures the number of independent paths through a program's code. More decision points (if, else, switch, loops) = more paths = higher complexity = harder to test and maintain.

M = E − N + 2
where M = complexity, E = number of edges, N = number of nodes in the control-flow graph

✅ Low Complexity (M = 1)

No branches - a single straight path (Start → A → B → C → End)

4 edges, 5 nodes → M = 4 − 5 + 2 = 1

⚠️ High Complexity (M = 4)

Multiple branches - many paths (Start → if? → A / if? → B → C → End)

8 edges, 6 nodes → M = 8 − 6 + 2 = 4

Rule of thumb: M = 1–5 is simple and easy to test. M = 6–10 needs attention. M > 10 is risky - consider refactoring into smaller functions.
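The formula applies directly to any control-flow graph. A one-line sketch reproducing the two examples above; the optional `connected_components` term is McCabe's general form M = E − N + 2P, and the slide's formula is the P = 1 case:

```python
def cyclomatic_complexity(num_edges, num_nodes, connected_components=1):
    """McCabe's measure for a control-flow graph: M = E - N + 2P."""
    return num_edges - num_nodes + 2 * connected_components
```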

Afferent vs. Efferent Coupling
  • Afferent Coupling (Ca) - incoming connections from other components
  • Efferent Coupling (Ce) - outgoing connections to other components
(Diagram: three packages - PackageA, PackageB, PackageC - with couplings Ca=0/Ce=3, Ca=2/Ce=2, and Ca=3/Ce=0.)
Instability: I = Ce / (Ce + Ca)
I = 0 → stable component · I = 1 → unstable component
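The instability index follows directly from the two coupling counts. A sketch; the guard against a fully uncoupled component is my addition, since the formula is undefined when Ca + Ce = 0:

```python
def instability(ca, ce):
    """I = Ce / (Ce + Ca): 0 = maximally stable, 1 = maximally unstable."""
    if ca + ce == 0:
        raise ValueError("component has no couplings; instability is undefined")
    return ce / (ce + ca)
```

A component that is only depended upon (Ce = 0) scores 0 and should change rarely; a component that only depends on others (Ca = 0) scores 1 and is free to change.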
Software Metrics Reference

| Category         | Metric                       | What it may indicate                                  |
|------------------|------------------------------|-------------------------------------------------------|
| Requirements     | Rate of change               | High rate → poor analysis, inflexible system          |
| Source Code      | Lines of Code (LoC)          | Component size; identify overly large components      |
| Source Code      | Cyclomatic Complexity        | High → harder to test, maintain, understand           |
| Source Code      | Afferent / Efferent Coupling | High efferent → unstable component                    |
| Failure          | MTBF / MTTR                  | System reliability and recovery speed                 |
| Failure          | Error Rate / Uptime          | Watch for error clusters → more bugs likely nearby    |
| Performance      | Latency / Throughput         | Response time, capacity under load                    |
| Performance      | Utilization / Saturation     | Resource usage, bottleneck detection                  |
| Software Process | # features over time         | Declining rate → technical debt accumulation          |
| Software Process | Meeting vs. working time     | High ratio → inefficient processes, coupling issues   |
| Tests            | Test coverage (%)            | Low → high bug risk; 100% may be unnecessary overhead |
Goodhart's Law
"When a measure becomes a target, it ceases to be a good measure."
β€” Charles Goodhart
⚠ Test Coverage Trap
  • As a metric: a good indicator to detect bug risks
  • As a goal (e.g. 100%): becomes meaningless - developers write trivial tests just to hit the target
⚠ Lines of Code Trap
  • As a metric: can help find overly complex parts
  • As a goal: developers will abuse it, writing verbose or padded code
Key principle: use metrics as indicators, not as goals. Metrics should inform decisions, not become the objective themselves.
📋 Chapter 3 - Summary
  • Metrics are:
    • Measurable indicators used to quantitatively assess the characteristics and quality of the software architecture.
  • Examples of metrics:
    • Requirements - rate of change
    • Source Code - LoC, cyclomatic complexity, coupling (afferent, efferent)
    • Failure - MTBF, MTTR, uptime/downtime, error rate
    • Performance - latency, throughput, utilization, saturation
    • Software Process - number of implemented/tested features over time, time in meetings, number of managers, etc.
    • Tests - number of tests, test coverage, etc.
  • Goodhart's Law - "When a measure becomes a target, it ceases to be a good measure."
04
Chapter Four

Qualitative Assessment of Software Architecture & ATAM

Quantitative vs. Qualitative Analysis
  • Quantitative evaluation (software β†’ some number)
    • Measure, compare with known quantities (e.g. LOC, dependencies, complexity, test coverage, …)
    • Problem cases: Concepts, structures, decisions, documents
  • Qualitative analysis and assessment
    • Identification of risks
    • Shows (non-)achievement of quality requirements
Qualitative Analysis
Compare Requirements & Constraints Against the Solution

Assessments are based on scenarios: they describe possible usages of the system by an actor, help view architecture decisions from different perspectives, and help identify quality criteria even when requirements are incomplete.

METHODOLOGIES: ATAM · Harris Profile · DCAR · CBAM
ATAM - Architecture Tradeoff Analysis Method
What is ATAM?
  • Selection of a suitable software architecture for a system
  • Scenario-based assessment regarding quality goal fulfilment
FOCUSES ON: ⚠ Risks · ⇌ Trade-offs · ◎ Sensitivity Points
Sensitivity points: aspects particularly sensitive to changes in environment or requirements.
ATAM Prerequisites
Required: 👷 Architect · 👥 Customer reps / functional experts · 📄 Architecture documentation
ATAM in a Nutshell
  1. Gather information about the target architecture & quality goals
  2. Assess the actual system concerning the quality goals
  3. Suggest improvement actions
This general process applies to all software evaluation approaches.
ATAM Conceptual Scheme
(Diagram: business drivers and the architectural plan yield quality attributes, architectural approaches, scenarios, and architectural decisions; their analysis produces tradeoffs, sensitivity points, non-risks, and risks, which are distilled into risk themes and their impacts.)
ATAM: Quality (Utility) Tree

Business Drivers → Quality Attributes → Scenarios, with priority notation (Importance, Implementation difficulty)

Utility
  • Performance
    • (M,L) Minimize storage latency on customer DB to 200 ms
    • (H,M) Deliver video in real time
  • Modifiability
    • (L,H) Add CORBA middleware in < 20 person-months
    • (H,L) Change web user interface in < 4 person-weeks
  • Availability
    • (L,H) Power outage at Site 1 → traffic re-directed to Site 2 in < 3 secs
    • (M,M) Restart after disk failure in < 5 mins
    • (H,M) Network failure detected and recovered in < 1.5 mins
  • Security
    • (L,H) Credit card transactions secure 99.999% of the time
    • (L,H) Customer database authorization works 99.999% of the time
Priority: (Importance, Implementation difficulty); H = High · M = Medium · L = Low
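The (Importance, Difficulty) ratings make the tree sortable. A sketch of how a review team might rank scenarios for analysis; the ranking rule (importance first, then difficulty as a proxy for risk) is one common choice, not prescribed by ATAM:

```python
# H/M/L ratings from the utility tree above, as (scenario, importance, difficulty).
RANK = {"H": 3, "M": 2, "L": 1}

scenarios = [
    ("Minimize storage latency on customer DB to 200 ms", "M", "L"),
    ("Deliver video in real time", "H", "M"),
    ("Add CORBA middleware in < 20 person-months", "L", "H"),
    ("Change web user interface in < 4 person-weeks", "H", "L"),
]

def prioritize(items):
    # Most important first; among equally important, hardest first.
    return sorted(items, key=lambda s: (RANK[s[1]], RANK[s[2]]), reverse=True)
```

With this rule, the high-importance, medium-difficulty video scenario is analyzed first, while the low-importance CORBA scenario drops to the bottom despite being the hardest.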
The Steps of ATAM - Two Phases
Phase 1
  1. Presentation of the ATAM method
  2. Presentation of business drivers
  3. Presentation of the architecture
  4. Identification of architectural approaches
  5. Generation of the quality attribute utility tree
  6. Analysis of architectural approaches
Phase 2
  7. Brainstorming and prioritization of scenarios
  8. Analysis of architectural approaches
  9. Presentation of the results
⟳ A break of several weeks separates Phase 1 and Phase 2
Proven Benefits of ATAM
  ✓ Maps a unique requirement for each quality attribute
  ✓ Improves architecture documentation
  ✓ Documented basis for architectural decisions
  ✓ Identifies risks early in the life cycle
  ✓ Improves communication between stakeholders
Other Assessment Methods
Harris Profile

Scenario-based matrix comparing requirements across different architectural scenarios (best, hybrid, progressive web app).

DCAR

Decision-Centric Architecture Review - focuses on architectural decisions through a structured 9-step process from preparation to retrospective.

CBAM

Cost Benefit Analysis Method - evaluates architectural decisions based on cost-benefit tradeoffs for quality attribute achievement.

📋 Chapter 4 - Summary
  • Two approaches to assess the quality of software architecture:
    • Quantitative Assessment approach
      • Uses metrics to measure the characteristics of our system → evaluate the quality of our software architecture
    • Qualitative Assessment approach
      • Uses scenarios to compare the requirements and constraints to the proposed software architecture.
      • Example: ATAM (Architecture Tradeoff Analysis Method)
Summary - Quality & Evaluation at a Glance
01 · Quality Modeling

Defining Software Quality

  • Quality = meeting explicit & implicit requirements
  • ISO 25010 quality model - 8 characteristics
  • Quality scenarios: stimulus → response → metric
02 · Evaluation Intro

Why & How to Evaluate

  • Validate architecture against quality goals early
  • Quantitative (metrics, benchmarks) vs. qualitative (review, scenarios)
  • Evaluate continuously, not just once at the end
03 · Quantitative & Goodhart's Law

Metrics & Their Pitfalls

  • Measure: response time, throughput, code coverage, complexity
  • Goodhart's Law: when a measure becomes a target, it ceases to be a good measure
  • Use metrics as indicators, not absolute truths
04 · ATAM

Qualitative Assessment

  • Architecture Tradeoff Analysis Method
  • Scenario-based: compare requirements to architecture decisions
  • Identify sensitivity points, trade-offs & risks