System Lifecycle

This document covers the system development lifecycle (SDLC) for the DSE ICT examination. Understanding the SDLC is essential for analysing how information systems are developed, from initial problem identification through to ongoing maintenance.


Overview of the System Development Lifecycle

The system development lifecycle (SDLC) is a structured methodology for developing information systems. It consists of distinct phases, each with specific activities and deliverables.

Phases of the SDLC

| Phase | Purpose | Key Deliverables |
| --- | --- | --- |
| Analysis | Understand the problem and identify requirements | Requirements specification, feasibility study |
| Design | Plan the solution (hardware, software, data, UI) | System specifications, designs, diagrams |
| Implementation | Build the system according to the design | Working system, programs, database |
| Testing | Verify the system works correctly and meets requirements | Test reports, bug fixes |
| Documentation | Produce user and technical documentation | User manual, technical guide, help files |
| Evaluation | Assess whether the system meets its objectives | Evaluation report |
| Maintenance | Keep the system running and up to date after deployment | Updates, patches, enhancements |

Phase 1: Analysis

Purpose

The analysis phase investigates the current system and the problem it needs to solve. The analyst gathers information about what the new system must do (requirements) without specifying how it will be built.

Activities

| Activity | Description |
| --- | --- |
| Problem identification | Define the current problem clearly and specifically |
| Fact finding | Collect information about the current system and requirements |
| Requirements analysis | Determine what the new system must do (functional requirements) |
| Feasibility study | Assess whether the project is viable and worthwhile |
| Requirements specification | Produce a formal document listing all requirements |

Fact-Finding Methods

| Method | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Interviews | Face-to-face or structured interviews with stakeholders | In-depth, allows follow-up questions | Time-consuming, may be biased |
| Questionnaires | Written surveys distributed to a large number of users | Reaches many people, anonymous | Low response rate, rigid questions |
| Observation | Watch users performing their current tasks | Reveals actual practices | Users may change behaviour when observed |
| Document analysis | Examine existing documents, forms, reports, manuals | Provides existing data source | May be outdated or incomplete |
| Sampling | Examine a representative subset of data | Efficient for large datasets | Sample may not be representative |

Types of Requirements

| Type | Description | Example |
| --- | --- | --- |
| Functional | What the system must do | "The system shall calculate monthly payroll" |
| Non-functional | How the system should perform | "The system shall respond within 2 seconds" |
| Input requirements | Data inputs the system must accept | "Accept student name, class, and scores" |
| Output requirements | Outputs the system must produce | "Print a class list sorted by name" |
| Storage requirements | Data that must be stored and retained | "Store 5 years of attendance records" |
| Performance | Speed, throughput, response time requirements | "Handle 100 concurrent users" |

Feasibility Study

A feasibility study evaluates whether a project is worth pursuing. It examines multiple dimensions of feasibility.

| Feasibility Type | Description | Key Questions |
| --- | --- | --- |
| Technical | Can the system be built with available technology and expertise? | Do we have the hardware, software, and skills? |
| Economic | Is the project financially viable? | Do the benefits outweigh the costs? Will it save money? |
| Operational | Will the system be accepted and usable by the organisation? | Will staff use it? Does it fit existing workflows? |
| Schedule | Can the project be completed within the required timeframe? | Do we have enough time? Are the deadlines realistic? |
| Legal | Does the project comply with relevant laws and regulations? | Data protection (PDPO), copyright, accessibility laws |

Worked Example: Feasibility Study for a School Library System

Scenario: A school wants to replace its manual library catalogue with a computerised system.

Technical feasibility:

  • The school has computers, a network, and a server room.
  • Library management software is commercially available (e.g., Koha, SLIMS).
  • A local IT company can provide installation and support.
  • Assessment: Feasible.

Economic feasibility:

  • Software cost: HKD 50,000 (one-time) + HKD 10,000/year (maintenance).
  • Hardware upgrades: HKD 20,000 (barcode scanner, additional workstation).
  • Staff training: HKD 5,000.
  • Total first-year cost: HKD 85,000.
  • Benefits: Reduced book search time (estimated 30 minutes/day saved for librarian), better tracking, fewer lost books (estimated savings of HKD 10,000/year).
  • Payback period: ~8.5 years (see the arithmetic sketch after this worked example). Assessment: Marginally feasible; long payback period.

Operational feasibility:

  • The librarian is willing to learn the new system.
  • Teachers and students will benefit from online catalogue search.
  • The system fits the existing workflow (borrowing/returning books).
  • Assessment: Feasible.

Schedule feasibility:

  • Installation during summer break (6 weeks available).
  • Training: 2 days for librarian, 1 day for teachers.
  • Data entry (cataloguing existing books): Estimated 4 weeks with part-time assistant.
  • Assessment: Feasible if started early in the summer break.

Legal feasibility:

  • Student data used in the library system is subject to PDPO.
  • The school must obtain consent for data collection and implement security measures.
  • Assessment: Feasible with proper PDPO compliance.

Recommendation: Proceed with the project, starting at the beginning of the summer break to allow sufficient time for data entry. Negotiate software cost or consider open-source alternatives to improve economic feasibility.
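
The economic assessment above is simple arithmetic and can be checked directly. The sketch below (Python, illustrative only) reproduces the first-year cost and the approximate 8.5-year payback figure, under the assumption that only the quantified saving of HKD 10,000/year from fewer lost books is counted as the recurring benefit; the librarian's time saving is not given a dollar value in the example.

```python
# Minimal sketch of the payback arithmetic used in the worked example above.
# Figures come from the example; counting only the quantified HKD 10,000/year
# saving from fewer lost books as the recurring benefit is an assumption.

software_one_time = 50_000      # HKD
hardware_upgrades = 20_000      # HKD
staff_training = 5_000          # HKD
annual_maintenance = 10_000     # HKD/year

first_year_cost = software_one_time + hardware_upgrades + staff_training + annual_maintenance
annual_saving = 10_000          # HKD/year (fewer lost books)

payback_years = first_year_cost / annual_saving

print(f"First-year cost: HKD {first_year_cost:,}")     # HKD 85,000
print(f"Payback period : ~{payback_years:.1f} years")  # ~8.5 years
```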


Phase 2: Design

Purpose

The design phase plans how the system will be built. It translates the requirements into detailed specifications for hardware, software, data, and user interface.

Design Activities

| Activity | Description |
| --- | --- |
| Hardware design | Specify the required hardware (servers, workstations, network) |
| Software design | Plan the program structure, modules, and algorithms |
| Data design | Design the database schema, tables, relationships, data dictionary |
| Input design | Design data entry screens, forms, validation rules |
| Output design | Design reports, screen layouts, printed outputs |
| User interface design | Design the interaction between users and the system |
| Security design | Plan access controls, encryption, backup procedures |

Design Tools and Techniques

| Tool | Description | Used In |
| --- | --- | --- |
| System flowchart | Shows the flow of data through the entire system | System design |
| Data flow diagram | Shows how data moves between processes, stores, and entities | System design |
| Entity-relationship diagram | Models the data entities and their relationships | Data design |
| Structure chart | Shows the hierarchical decomposition of the system into modules | Software design |
| Pseudocode | Structured description of algorithm logic | Software design |
| Wireframe | Sketch of the user interface layout | UI design |
| Data dictionary | Detailed description of every data item in the system | Data design |

Data Flow Diagrams (DFD)

DFDs show how data flows through a system. They use four symbols:

| Symbol | Meaning |
| --- | --- |
| External entity | A source or destination of data (rectangle) |
| Process | A data transformation activity (rounded rectangle) |
| Data store | A repository of data (open-ended rectangle) |
| Data flow | Movement of data (arrow) |

Levels of DFD:

| Level | Name | Description |
| --- | --- | --- |
| Level 0 | Context diagram | Shows the entire system as a single process with external entities |
| Level 1 | Overview | Major processes and data flows within the system |
| Level 2 | Detailed | Expanded view of individual Level 1 processes |

Phase 3: Implementation

Purpose

The implementation phase builds the system according to the design specifications. This includes writing software, installing hardware, configuring the system, and migrating data.

Implementation Methods

| Method | Description | Advantage | Disadvantage |
| --- | --- | --- | --- |
| Direct changeover | Old system is replaced by the new system on a single date | Quick, low parallel running cost | High risk; no fallback |
| Parallel running | Both old and new systems run simultaneously for a period | Safe; can compare results | Expensive; double workload |
| Phased implementation | System is introduced in stages, one module at a time | Lower risk per phase | Takes longer; temporary hybrid |
| Pilot running | New system is trialled with one department or location first | Risk limited to pilot group | Pilot may not represent whole organisation |

Worked Example: Choosing an Implementation Method

Scenario: A hospital is replacing its patient record system with a new electronic health record (EHR) system.

Recommendation: Phased implementation, starting with a pilot.

Reasoning:

  1. Direct changeover is too risky: Patient safety is critical. If the new system fails, doctors would have no access to patient records, which could endanger lives.
  2. Parallel running is impractical: Doctors and nurses already have heavy workloads. Maintaining two systems simultaneously would be burdensome and error-prone.
  3. Phased implementation with pilot: Start with one department (e.g., outpatients) as a pilot. Identify and fix issues before rolling out to other departments (inpatients, surgery, pharmacy). This limits risk while allowing iterative improvement.

Rollout plan:

| Phase | Department | Duration | Notes |
| --- | --- | --- | --- |
| 1 | Outpatients | 2 months | Pilot; intensive monitoring |
| 2 | Inpatients | 2 months | Apply lessons from Phase 1 |
| 3 | Pharmacy | 1 month | Integration with prescription system |
| 4 | Surgery | 1 month | Final department |

Data Migration

Moving data from the old system to the new system requires careful planning.

| Step | Activity | Purpose |
| --- | --- | --- |
| 1 | Data extraction | Extract data from the old system |
| 2 | Data cleaning | Remove duplicates, fix errors, standardise |
| 3 | Data transformation | Convert to the new system's format |
| 4 | Data loading | Import into the new system |
| 5 | Data verification | Compare old and new data for consistency |
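
The five migration steps can be pictured as a small pipeline. The sketch below (Python, illustrative only) uses made-up library records; all field and function names are hypothetical and stand in for whatever the old and new systems actually use.

```python
# Illustrative sketch of the five data migration steps with made-up records.
# Field and function names are hypothetical, not from any specific system.

old_records = [
    {"name": " Chan Tai Man ", "class": "5A", "borrowed": "3"},
    {"name": "Chan Tai Man",   "class": "5A", "borrowed": "3"},   # duplicate
]

def extract(source):
    """Step 1: pull raw records out of the old system (here, a list)."""
    return list(source)

def clean(records):
    """Step 2: trim whitespace and drop exact duplicates."""
    seen, result = set(), []
    for r in records:
        r = {k: v.strip() if isinstance(v, str) else v for k, v in r.items()}
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            result.append(r)
    return result

def transform(records):
    """Step 3: convert to the new system's format (e.g., numeric fields)."""
    return [{"student_name": r["name"], "class": r["class"],
             "books_on_loan": int(r["borrowed"])} for r in records]

def load(records, target):
    """Step 4: import into the new system (a list standing in for a database)."""
    target.extend(records)

def verify(old, new):
    """Step 5: check that record counts match after de-duplication."""
    return len(new) == len(clean(old))

new_db = []
load(transform(clean(extract(old_records))), new_db)
print(new_db)
print("Verified:", verify(old_records, new_db))
```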

Phase 4: Testing

Purpose

Testing verifies that the system works correctly, meets requirements, and is free from defects.

Types of Testing

| Test Type | Description | When Performed |
| --- | --- | --- |
| Unit testing | Tests individual components (functions, modules) in isolation | During development |
| Integration testing | Tests how components work together | After unit testing |
| System testing | Tests the entire system as a whole | After integration testing |
| Acceptance testing | Tests whether the system meets the user's requirements | Before handover to users |
| Alpha testing | Testing by the development team in a controlled environment | Before beta testing |
| Beta testing | Testing by real users in their normal environment | Before final release |

Test Data

| Data Type | Description | Purpose |
| --- | --- | --- |
| Normal | Typical, expected values | Verify normal operation |
| Boundary | Values at the edges of valid ranges | Check edge cases |
| Erroneous | Invalid, out-of-range, or incorrect values | Verify error handling |
| Extreme | Very large or very small values | Stress test the system |
| Absent | Missing or null values | Verify handling of missing data |

Worked Example: Test Plan for a Score Entry System

A system accepts student scores (0--100) and calculates grades.

| Test Case ID | Input | Expected Output | Test Type | Purpose |
| --- | --- | --- | --- | --- |
| TC-01 | 85 | Grade: A | Normal | Typical input |
| TC-02 | 60 | Grade: B | Normal | Typical input |
| TC-03 | 0 | Grade: F | Boundary | Minimum valid |
| TC-04 | 100 | Grade: A | Boundary | Maximum valid |
| TC-05 | -1 | Error message | Erroneous | Below range |
| TC-06 | 101 | Error message | Erroneous | Above range |
| TC-07 | "abc" | Error message | Erroneous | Non-numeric |
| TC-08 | (empty) | Error message | Absent | Missing input |
| TC-09 | 79 | Grade: B | Boundary | Just below A |
| TC-10 | 80 | Grade: A | Boundary | Threshold for A |
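
The test plan can be exercised against a small implementation of the validation and grading logic it implies. The sketch below (Python, illustrative only) encodes only the rules the test cases pin down (whole-number scores from 0 to 100; 80 and above is an A; 60 and 79 map to B; 0 is F); treating every score below 60 as F is an assumption made to keep the example short.

```python
# Sketch of the score-entry validation and grading logic implied by the test
# plan above. Thresholds below B are assumptions, not part of the original.

def grade(raw):
    """Validate a raw score value and return 'Grade: X' or an error message."""
    if raw is None or str(raw).strip() == "":
        return "Error message"          # absent input (TC-08)
    try:
        score = int(raw)
    except ValueError:
        return "Error message"          # non-numeric (TC-07)
    if score < 0 or score > 100:
        return "Error message"          # out of range (TC-05, TC-06)
    if score >= 80:
        return "Grade: A"
    if score >= 60:
        return "Grade: B"
    return "Grade: F"                   # assumption: intermediate grades omitted

# Test cases copied from the table above: (ID, input, expected output)
cases = [
    ("TC-01", "85", "Grade: A"), ("TC-02", "60", "Grade: B"),
    ("TC-03", "0", "Grade: F"),  ("TC-04", "100", "Grade: A"),
    ("TC-05", "-1", "Error message"), ("TC-06", "101", "Error message"),
    ("TC-07", "abc", "Error message"), ("TC-08", "", "Error message"),
    ("TC-09", "79", "Grade: B"), ("TC-10", "80", "Grade: A"),
]

for case_id, value, expected in cases:
    result = grade(value)
    print(case_id, "PASS" if result == expected else f"FAIL ({result})")
```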

Phase 5: Documentation

Types of Documentation

| Document Type | Audience | Description |
| --- | --- | --- |
| User manual | End users | How to use the system (step-by-step instructions, screenshots) |
| Technical guide | IT staff | Technical specifications, installation, configuration |
| System documentation | Developers | Program code comments, design documents, data dictionary |
| Training materials | Trainees | Exercises, tutorials, quick reference guides |
| Operator guide | System operators | Day-to-day operation procedures, backup and recovery instructions |
| Help files | End users | Context-sensitive help, FAQs, troubleshooting |

What Makes Good Documentation

| Characteristic | Description |
| --- | --- |
| Clear and concise | Easy to understand, avoids jargon |
| Well-organised | Logical structure with table of contents and index |
| Up to date | Reflects the current version of the system |
| Illustrated | Includes screenshots, diagrams, and examples |
| Accessible | Available in multiple formats (print, online, PDF) |
| Version-controlled | Tracks changes between versions |

Phase 6: Evaluation

Purpose

Evaluation assesses whether the system meets its original objectives and identifies areas for improvement.

Evaluation Criteria

| Criterion | Description | Measurement Method |
| --- | --- | --- |
| Functionality | Does the system do what it was designed to do? | Testing against requirements specification |
| Usability | Is the system easy to learn and use? | User surveys, observation |
| Efficiency | Does the system improve productivity? | Time studies before/after |
| Reliability | Does the system operate without failures? | Error logs, uptime monitoring |
| Security | Does the system protect data adequately? | Security audit, penetration testing |
| Cost-effectiveness | Do the benefits justify the costs? | Cost-benefit analysis |
| User satisfaction | Are users happy with the system? | Surveys, interviews, feedback forms |

Post-Implementation Review

A formal review conducted after the system has been in use for a defined period (typically 3--6 months).

Questions to address:

  1. Does the system meet all functional requirements?
  2. Are there any performance issues?
  3. What problems have users encountered?
  4. Are there any features that users want but were not included?
  5. Is the documentation adequate?
  6. What training gaps exist?
  7. What maintenance has been required?
  8. Was the project completed within budget and schedule?

Phase 7: Maintenance

Types of Maintenance

| Type | Description | Example |
| --- | --- | --- |
| Corrective | Fixing bugs and errors discovered after deployment | Fix a calculation error in payroll system |
| Adaptive | Modifying the system to work in a changed environment | Update software for new operating system |
| Perfective | Improving the system's performance or adding new features | Add a reporting module requested by users |
| Preventive | Performing maintenance to prevent future problems | Update database indexes to maintain speed |

Maintenance Costs

Maintenance typically accounts for 60--80% of the total cost of a system over its lifetime. This is often underestimated during the initial project planning.


Project Management

Gantt Charts

A Gantt chart is a horizontal bar chart that shows the project schedule. Each task is represented as a bar, with the bar's length proportional to the task's duration.

Task                | Week 1 | Week 2 | Week 3 | Week 4 | Week 5 | Week 6 |
--------------------|--------|--------|--------|--------|--------|--------|
Analysis            | XXXXXX |        |        |        |        |        |
Feasibility Study   | XXXX   |        |        |        |        |        |
Design              |        | XXXXXX | XXXX   |        |        |        |
Database Design     |        | XXXX   |        |        |        |        |
Implementation      |        |        | XXXXXX | XXXXXX |        |        |
Testing             |        |        |        |        | XXXXXX |        |
Documentation       |        |        |        |        | XXXX   | XXXX   |
Training            |        |        |        |        |        | XXXX   |
Evaluation          |        |        |        |        |        | XXXX   |
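
A chart like this can be generated from a simple task list. The sketch below (Python, illustrative only) prints a week-by-week grid from (task, start week, end week) values read off the chart above; partial weeks, shown as shorter bars (XXXX) in the chart, are rounded to whole weeks to keep the sketch short.

```python
# Sketch that prints a text Gantt chart from a task list. Start and end
# weeks are read off the chart above; partial weeks are rounded to whole weeks.

tasks = [
    ("Analysis", 1, 1), ("Feasibility Study", 1, 1),
    ("Design", 2, 3), ("Database Design", 2, 2),
    ("Implementation", 3, 4), ("Testing", 5, 5),
    ("Documentation", 5, 6), ("Training", 6, 6), ("Evaluation", 6, 6),
]
weeks = 6

header = "Task".ljust(20) + "".join(f"| Week {w} " for w in range(1, weeks + 1)) + "|"
print(header)
print("-" * len(header))
for name, start, end in tasks:
    row = name.ljust(20)
    for w in range(1, weeks + 1):
        row += "| XXXXXX " if start <= w <= end else "|        "
    print(row + "|")
```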

Key elements of a Gantt chart:

| Element | Description |
| --- | --- |
| Task list | All project tasks listed on the vertical axis |
| Timeline | Time periods (days, weeks, months) on the horizontal axis |
| Task bars | Horizontal bars showing each task's start and end dates |
| Dependencies | Arrows showing which tasks depend on the completion of others |
| Milestones | Diamond markers indicating key dates or deliverables |
| Critical path | The longest sequence of dependent tasks; determines the minimum project duration |

Critical Path Analysis

The critical path is the sequence of tasks that determines the minimum project duration. Any delay on the critical path delays the entire project.

Float (slack): The amount of time a task can be delayed without affecting the project deadline.

  • Tasks on the critical path have zero float.
  • Tasks not on the critical path have positive float (can be delayed without affecting the deadline).

Project Management Tools

| Tool | Description | Use Case |
| --- | --- | --- |
| Gantt chart | Visual timeline of tasks | Scheduling and monitoring |
| PERT chart | Network diagram showing task dependencies | Critical path analysis |
| CPM | Critical Path Method; identifies the longest task sequence | Project duration planning |
| Milestones | Key dates or deliverables marked on the timeline | Progress tracking |

Stakeholder Engagement

| Stakeholder | Interest Level | Influence | Engagement Strategy |
| --- | --- | --- | --- |
| Project sponsor | High | High | Regular updates, executive summaries |
| End users | High | Medium | Surveys, training, feedback sessions |
| Management | Medium | High | Progress reports, milestone reviews |
| IT department | High | High | Technical reviews, collaboration |
| External vendors | Medium | Medium | Contract management, SLAs |

Change Management

Why Change Management Matters

Introducing a new system changes how people work. Resistance to change is natural and must be managed to ensure successful adoption.

Strategies for Managing Change

| Strategy | Description |
| --- | --- |
| Communication | Keep all stakeholders informed about the change and its benefits |
| Training | Provide comprehensive training before and after the change |
| Involvement | Involve users in the design and testing phases |
| Champion identification | Identify enthusiastic users who can advocate for the system |
| Phased introduction | Introduce changes gradually to reduce disruption |
| Feedback mechanism | Provide channels for users to report issues and suggest improvements |
| Management support | Visible support from senior management |

Reasons for Resistance to Change

| Reason | Description |
| --- | --- |
| Fear of redundancy | Concern that the new system will eliminate jobs |
| Comfort with current system | Users are familiar with the existing system and reluctant to change |
| Lack of understanding | Users do not understand the benefits of the new system |
| Inadequate training | Users feel unprepared to use the new system |
| Poor past experiences | Previous system changes were handled badly |
| Increased workload | The transition period requires extra effort from users |

Common Pitfalls

  1. Skipping the analysis phase: Jumping straight to implementation without proper requirements analysis leads to a system that does not meet users' needs. The analysis phase is critical for understanding the problem.

  2. Insufficient testing: Testing only with normal data misses edge cases and error conditions. A comprehensive test plan must include boundary, erroneous, extreme, and absent data.

  3. Poor documentation: Documentation is often treated as an afterthought. Inadequate documentation makes the system difficult to use, maintain, and support.

  4. Ignoring user feedback: Users who will operate the system daily often have practical insights that analysts miss. Ignoring their feedback leads to poor user acceptance.

  5. Underestimating maintenance: Many projects fail to budget for ongoing maintenance, which can account for 60--80% of total system costs over its lifetime.

  6. No feasibility study: Starting a project without assessing feasibility wastes resources on projects that may not be technically, economically, or operationally viable.

  7. Direct changeover without backup: Switching to a new system without a fallback plan is extremely risky. If the new system fails, there is no way to continue operations.

  8. Critical path not identified: Failing to identify the critical path means resources may not be allocated to the most time-sensitive tasks, causing project delays.

  9. Inadequate training: Users who are not properly trained will resist the new system, make more errors, and reduce the system's effectiveness.

  10. Scope creep: Adding new requirements during the project without adjusting the budget and schedule leads to delays, cost overruns, and incomplete systems.


Practice Problems

Question 1: SDLC Phases

A school is developing a new student attendance tracking system that uses RFID cards to record attendance automatically.

(a) For the analysis phase, describe two fact-finding methods that would be appropriate and explain what information each would gather.

(b) For the design phase, state two types of documentation that should be produced and describe their contents.

(c) Recommend an implementation method and justify your choice.

(d) Describe two types of testing that should be performed before the system goes live.

Answer:

(a) Interviews with teachers: Teachers currently take manual attendance. Interviews would reveal their current workflow, pain points (e.g., time wasted, inaccurate records), and what features they need in the new system.

Observation: Observe the morning registration process to understand exactly how attendance is currently recorded, how long it takes, and what problems arise (e.g., students forgetting to report, proxy attendance).

(b) Data flow diagram: Shows how attendance data flows from the RFID scanner through the system to the attendance database and then to reports viewed by teachers and parents.

User interface design: Mockups of the teacher's dashboard showing real-time attendance status, absence alerts, and report generation features.

(c) Pilot running. Start with one class or one form level as a pilot. This limits risk -- if the RFID system has issues (e.g., readers not scanning cards reliably), only the pilot group is affected. Problems can be identified and fixed before rolling out to the entire school.

(d) Integration testing: Test that the RFID readers communicate correctly with the central system, that attendance data is stored accurately in the database, and that the teacher dashboard displays correct information.

Acceptance testing: Teachers and administrators use the system in their normal environment and verify that it meets their requirements (accurate attendance recording, timely reporting, easy to use).

Question 2: Feasibility Study

A small retail shop with 3 employees is considering installing a barcode-based point-of-sale (POS) system to replace their current manual cash register.

(a) Assess the technical, economic, and operational feasibility of this project.

(b) Identify three stakeholders and explain their interest in the project.

Answer:

(a) Technical feasibility: Barcode scanners and POS software are mature, widely available technologies. The shop has a computer that can run POS software. A barcode scanner costs approximately HKD 500--2,000. Basic POS software is available from HKD 2,000--10,000. No specialised technical expertise is needed for installation. Assessment: Feasible.

Economic feasibility: One-time costs: POS software (HKD 5,000), barcode scanner (HKD 1,000), barcode printer for labels (HKD 1,500), training (HKD 1,000) = total HKD 8,500. Ongoing costs: software maintenance (HKD 1,000/year). Benefits: faster checkout (save ~5 min per transaction), reduced errors in pricing and change-giving, automatic inventory tracking, better sales reporting. Estimated savings: HKD 3,000--5,000/year. Assessment: Feasible. Payback period ~2 years.

Operational feasibility: With only 3 employees, training is minimal (1--2 days). The POS system simplifies their work (no manual price lookup, automatic inventory updates). Risk of resistance is low if employees see the time savings. Assessment: Feasible.

(b) Three stakeholders:

  1. Shop owner: Wants accurate financial reporting, inventory tracking, reduced errors, and increased profitability. Has the decision-making power.
  2. Shop assistants: Want a system that is easy to use, speeds up checkout, and does not create extra work. Concerned about learning a new system.
  3. Customers: Benefit from faster checkout, accurate pricing, and itemised receipts. Indirect stakeholders who do not use the system directly.

Question 3: Gantt Chart and Critical Path

A project has the following tasks and dependencies:

| Task | Duration (weeks) | Dependencies |
| --- | --- | --- |
| A: Analysis | 3 | None |
| B: Design | 4 | A |
| C: Programming | 6 | B |
| D: Testing | 2 | C |
| E: Documentation | 3 | C |
| F: Training | 1 | D, E |
| G: Evaluation | 2 | F |

(a) Draw the Gantt chart (describe the schedule in text).

(b) Identify the critical path.

(c) Calculate the minimum project duration.

(d) Task E is delayed by 1 week. Does this affect the project completion date? Explain.

Answer:

(a) Schedule (start and end week of each task):

| Task | Start | End | Weeks Active |
| --- | --- | --- | --- |
| A: Analysis | 1 | 3 | 1, 2, 3 |
| B: Design | 4 | 7 | 4, 5, 6, 7 |
| C: Programming | 8 | 13 | 8--13 |
| D: Testing | 14 | 15 | 14, 15 |
| E: Documentation | 14 | 16 | 14, 15, 16 |
| F: Training | 17 | 17 | 17 |
| G: Evaluation | 18 | 19 | 18, 19 |

(b) Critical path: A --> B --> C --> E --> F --> G. Although Task D also feeds into F, Task E takes longer (3 weeks versus 2 weeks), so F must wait for E; the path through E is the longest chain of dependent tasks.

(c) Critical path length: 3 + 4 + 6 + 3 + 1 + 2 = 19 weeks. Minimum project duration: 19 weeks, which matches the schedule in part (a), where the final task ends at week 19.

(d) Task E (Documentation) is on the critical path, so it has zero float. Task F depends on both D and E; D finishes at week 15 but E does not finish until week 16, so E determines F's start at week 17. If E is delayed by 1 week, it finishes at week 17, F cannot start until week 18, and G finishes at week 20 instead of week 19. The project completion date is therefore delayed by 1 week. By contrast, Task D has 1 week of float: it could finish as late as week 16 without delaying F.
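
The float and critical-path figures above can be reproduced with a short forward and backward pass over the task network. The sketch below (Python, illustrative only) copies the task durations and dependencies from the question, then prints each task's earliest finish, its float, the critical path, and the 19-week minimum duration.

```python
# Sketch of the forward/backward pass used above, with the task data from
# this question: name -> (duration in weeks, list of dependencies).

tasks = {
    "A": (3, []), "B": (4, ["A"]), "C": (6, ["B"]),
    "D": (2, ["C"]), "E": (3, ["C"]), "F": (1, ["D", "E"]), "G": (2, ["F"]),
}

# Forward pass: earliest finish = max(earliest finish of dependencies) + duration.
# The dictionary is listed in dependency order, so a single pass is enough.
earliest = {}
for name, (dur, deps) in tasks.items():
    earliest[name] = max((earliest[d] for d in deps), default=0) + dur

project_length = max(earliest.values())          # 19 weeks

# Backward pass: a task's latest finish is the earliest of its successors'
# latest start times (latest finish of successor minus successor's duration).
latest = {name: project_length for name in tasks}
for name in reversed(list(tasks)):
    for dep in tasks[name][1]:
        latest[dep] = min(latest[dep], latest[name] - tasks[name][0])

for name in tasks:
    slack = latest[name] - earliest[name]
    print(f"{name}: earliest finish week {earliest[name]:2d}, float {slack} week(s)")

critical_path = [n for n in tasks if latest[n] == earliest[n]]
print("Critical path:", " -> ".join(critical_path))   # A -> B -> C -> E -> F -> G
print("Minimum duration:", project_length, "weeks")   # 19
```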

Question 4: Testing and Evaluation

A hospital's new patient booking system has just been developed. The system allows patients to book appointments online, receive confirmation emails, and allows staff to manage the appointment calendar.

(a) Describe three types of testing that should be performed, with specific examples for this system.

(b) After 3 months of operation, describe how the hospital should evaluate the system.

(c) A user reports that when they try to book an appointment on 29 February (a non-leap year), the system accepts the booking. What type of test would have caught this error?

Answer:

(a) Unit testing: Test individual functions in isolation. For example, test the date validation function with various dates to ensure it correctly identifies valid and invalid dates.

Integration testing: Test that the booking module correctly interacts with the email module (confirmation emails are sent), the database (appointments are stored correctly), and the calendar display (booked slots are marked as unavailable).

Acceptance testing: Hospital staff and a sample of patients use the system to perform real tasks (booking an appointment, cancelling, rescheduling) and verify that the system meets their requirements.

Security testing: Test that unauthorised users cannot access patient records, that SQL injection attacks are prevented, and that patient data is encrypted in transit.

(b) Evaluation methods:

  1. User satisfaction survey: Ask patients and staff to rate the system on ease of use, reliability, and features.
  2. Quantitative metrics: Measure the reduction in phone calls for appointments, the percentage of appointments booked online vs phone, and the average time to book an appointment.
  3. Error log analysis: Review system logs for bugs, crashes, or failed transactions.
  4. Comparison with objectives: Compare actual performance against the original requirements specification (e.g., "reduce phone booking calls by 50%").

(c) Boundary testing would have caught this error. The date 29 February is invalid in a non-leap year (for example, 2025). Boundary testing should include dates at the edges of valid ranges, including the last day of February in both leap and non-leap years. The tests should verify that the system rejects 29 February in non-leap years and accepts it in leap years.
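
A small validation routine plus a handful of boundary tests would have caught this defect. The sketch below (Python, illustrative only; the function name is hypothetical) relies on the standard datetime module, which rejects impossible calendar dates such as 29 February in a non-leap year.

```python
# Sketch of a date-validation check and the boundary tests that would have
# caught the 29 February error. datetime.date refuses to construct invalid
# calendar dates, so validation reduces to attempting the construction.

from datetime import date

def is_valid_booking_date(year, month, day):
    """Return True only if (year, month, day) is a real calendar date."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# Boundary tests around the end of February
assert is_valid_booking_date(2024, 2, 29)        # leap year: valid
assert not is_valid_booking_date(2025, 2, 29)    # non-leap year: must be rejected
assert is_valid_booking_date(2025, 2, 28)        # last valid day of Feb 2025
print("All February boundary tests passed")
```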

Question 5: Change Management Scenario

A large company is replacing its legacy payroll system with a modern cloud-based system. Many employees have used the old system for 15+ years and are resistant to the change.

(a) Identify three reasons why employees might resist this change.

(b) Describe three change management strategies the company should use to overcome resistance.

(c) Explain why parallel running would be inappropriate for this project and recommend an alternative.

Answer:

(a) Three reasons for resistance:

  1. Comfort with the current system: Employees have used the old system for 15+ years and are highly proficient. They fear the learning curve of the new system.
  2. Fear of job loss: Employees may worry that automation in the new system will make their roles redundant.
  3. Disruption during transition: Learning the new system while performing their normal duties increases workload and stress. Mistakes during the learning period could affect payroll accuracy.

(b) Three change management strategies:

  1. Involvement: Include payroll staff representatives in the design and testing phases. When users help shape the system, they feel ownership and are more likely to accept it.
  2. Comprehensive training: Provide hands-on training sessions well before the go-live date. Offer refresher sessions and a help desk for the first few months. Training reduces anxiety and builds competence.
  3. Identify champions: Find enthusiastic, influential employees who can advocate for the new system among their peers. Positive word-of-mouth from trusted colleagues is more effective than top-down mandates.

(c) Parallel running is inappropriate because payroll involves sensitive financial data. Running two payroll systems simultaneously means processing payroll twice per month, which doubles the workload for payroll staff. Any discrepancy between the two systems creates confusion and requires manual reconciliation.

Alternative: Phased implementation. Start with one department or one payroll cycle (e.g., run the June payroll on the new system while running May on the old system). Verify results, fix issues, and then switch completely. This limits risk without the burden of full parallel running.