Phase 1: Research
The Research phase is the foundation of every successful software project. It transforms a vague idea or business need into a concrete, actionable plan. Teams that invest deeply in this phase avoid costly pivots, misaligned features, and wasted engineering effort downstream.
Understanding the Problem
Every application exists to solve a problem or fulfill a need. Before exploring solutions, the team must articulate exactly what that problem is — and for whom.
Stakeholder Discovery
Stakeholder discovery identifies everyone who has a vested interest in the project's outcome. This includes direct users, business sponsors, support teams, compliance officers, and anyone whose workflow will be affected. Each stakeholder group brings a different perspective, and failing to include even one can result in blind spots that surface late in development.
Effective discovery techniques include one-on-one interviews that allow stakeholders to speak candidly about pain points, workshop sessions that bring cross-functional groups together to map current processes, and shadowing or observation sessions where team members watch users perform their tasks in real environments. The output is typically a stakeholder map that categorizes each group by their level of influence and interest, ensuring the right people are consulted at the right times.
Problem Statement
A well-crafted problem statement is specific, measurable, and free of assumed solutions. Compare "we need a new dashboard" (solution-oriented) with "regional sales managers spend an average of three hours per week manually compiling performance data from four separate systems, delaying decision-making and increasing error rates" (problem-oriented). The second version gives the team clear criteria for success and opens the door to multiple possible solutions.
Defining Success Metrics
Before any design or development work begins, the team should agree on how success will be measured. These metrics might include quantitative targets such as reducing task completion time by 50%, achieving a Net Promoter Score above 40, or reaching 10,000 monthly active users within six months. They might also include qualitative goals like improving user satisfaction or reducing support ticket volume for a specific workflow. Defining these metrics early ensures that every subsequent decision can be evaluated against a clear standard.
Requirements Gathering
Requirements translate the problem statement into a structured description of what the application must do and how it must behave.
Functional Requirements
Functional requirements describe the specific behaviors and capabilities of the system. They answer the question "what does the application do?" and are typically expressed as user stories, use cases, or detailed specification items.
A user story follows the format: "As a [type of user], I want to [perform an action] so that [I achieve a goal]." For example, "As a hiring manager, I want to filter candidates by skill set so that I can quickly identify qualified applicants." Each story should include acceptance criteria — concrete conditions that must be true for the story to be considered complete. Well-written acceptance criteria eliminate ambiguity and give developers and testers a shared understanding of "done."
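Acceptance criteria are easiest to keep unambiguous when they can be expressed as executable checks. The sketch below is illustrative only: the `Candidate` type and `filter_by_skills` function are hypothetical stand-ins for the feature in the hiring-manager story, not part of any real system.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skills: set[str]

def filter_by_skills(candidates: list[Candidate], required: set[str]) -> list[Candidate]:
    """Hypothetical feature under specification: return only the
    candidates who possess every required skill."""
    return [c for c in candidates if required <= c.skills]

# The story's acceptance criteria, written as checks that a developer
# and a tester can read the same way:
pool = [
    Candidate("Ana", {"python", "sql"}),
    Candidate("Ben", {"sql"}),
]

# AC1: only candidates with ALL required skills are returned
assert filter_by_skills(pool, {"python", "sql"}) == [pool[0]]
# AC2: an empty requirement set matches everyone
assert filter_by_skills(pool, set()) == pool
```

Framing acceptance criteria this way also gives the team a head start on automated tests once development begins.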
Use cases provide a more detailed narrative of how a user interacts with the system, including the main success scenario, alternative paths, and exception handling. They are especially valuable for complex workflows with multiple decision points.
Non-Functional Requirements
Non-functional requirements define the quality attributes of the system. They often have a greater impact on architecture decisions than functional requirements. Key categories include:

- Performance: response times, throughput, and latency targets under normal and peak load
- Scalability: the ability to handle growth in users, data volume, or transaction frequency without degradation
- Reliability and availability: uptime targets, often expressed as a percentage like 99.9%, along with recovery time and recovery point objectives
- Security: authentication, authorization, data encryption, and compliance with standards like SOC 2, HIPAA, or GDPR
- Maintainability: code quality standards, documentation expectations, and ease of onboarding new developers
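Availability targets become much more concrete when translated into a downtime budget. A minimal worked example (the function name is our own, chosen for illustration):

```python
def downtime_budget_minutes(availability_pct: float, period_days: float = 365.0) -> float:
    """Allowed downtime, in minutes, implied by an availability target
    over the given period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

# "Three nines" sounds strict until you see it allows nearly nine
# hours of downtime per year, while "four nines" allows under an hour.
for target in (99.0, 99.9, 99.99):
    print(f"{target}%: {downtime_budget_minutes(target):.1f} min/year")
```

Running the numbers like this early helps stakeholders decide whether a target such as 99.99% is worth its operational cost.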
Prioritization
Not all requirements are equally important. Prioritization frameworks help the team focus on what matters most. The MoSCoW method categorizes requirements as Must Have (non-negotiable for launch), Should Have (important but not critical), Could Have (desirable if time permits), and Won't Have (explicitly out of scope for this release). The Kano model classifies features by their impact on user satisfaction, distinguishing between basic expectations, performance features that increase satisfaction linearly, and delight features that create outsized positive reactions. Weighted scoring assigns numerical values based on business impact, user value, technical risk, and implementation effort, then ranks requirements by their composite score.
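Weighted scoring is simple enough to sketch directly. The criteria, weights, and scores below are invented for illustration; in practice they would come from stakeholder input, and effort is inverted here so that cheaper requirements score higher.

```python
# Hypothetical weights (must sum to 1.0) and 1-5 scores per criterion.
WEIGHTS = {"business_impact": 0.4, "user_value": 0.3, "risk_reduction": 0.1, "low_effort": 0.2}

requirements = {
    "export to CSV": {"business_impact": 4, "user_value": 5, "risk_reduction": 2, "low_effort": 4},
    "SSO login":     {"business_impact": 5, "user_value": 3, "risk_reduction": 4, "low_effort": 2},
}

def composite(scores: dict[str, int]) -> float:
    """Weighted sum of a requirement's criterion scores."""
    return sum(WEIGHTS[k] * v for k, v in scores.items())

# Rank requirements by composite score, highest first.
ranked = sorted(requirements, key=lambda r: composite(requirements[r]), reverse=True)
for name in ranked:
    print(f"{composite(requirements[name]):.2f}  {name}")
```

The value of the exercise is less the final numbers than the conversation it forces about what each criterion is worth.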
User Research
Understanding users is not the same as gathering requirements from stakeholders. User research involves direct engagement with the people who will actually use the application day to day.
Personas
Personas are fictional but data-driven representations of key user segments. A strong persona includes demographic information, job responsibilities, goals and motivations, frustrations and pain points, technical proficiency, and the context in which they will use the application. Personas keep the team grounded in real user needs rather than designing for an abstract "user." They are most effective when based on actual interview and survey data rather than assumptions.
User Interviews and Surveys
User interviews are semi-structured conversations that explore how people currently accomplish their tasks, what frustrates them, and what an ideal solution would look like. Open-ended questions like "walk me through how you handle X today" yield richer insights than closed questions. Surveys complement interviews by reaching a larger audience and providing quantitative data on preferences, frequency of tasks, and satisfaction levels.
Journey Mapping
A user journey map visualizes the end-to-end experience a user has when accomplishing a goal, including every touchpoint, action, thought, and emotion along the way. Journey maps expose friction points, redundant steps, and moments of confusion that might not surface in a requirements document. They also highlight opportunities to delight users by simplifying or automating painful steps.
Wireframes and Prototypes
Low-fidelity wireframes sketch out the basic layout and flow of the application without visual design. They are fast to create and easy to iterate on, making them ideal for early validation. Interactive prototypes built with tools like Figma, Sketch, or Adobe XD add clickable navigation, allowing users to experience the flow before any code is written. Usability testing with prototypes is one of the highest-value activities in the Research phase, as it can reveal fundamental design flaws when they are still cheap to fix.
Market and Competitive Analysis
For commercial products, understanding the market landscape informs positioning, feature prioritization, and go-to-market strategy. Even for internal tools, competitive analysis can reveal existing solutions that might be adopted or adapted rather than built from scratch.
Competitive Landscape
A competitive analysis catalogues direct competitors (products solving the same problem for the same audience), indirect competitors (products solving the same problem differently or for a different audience), and potential substitutes (manual processes, spreadsheets, or workarounds users currently rely on). For each competitor, the analysis should capture core features and capabilities, pricing model, strengths and weaknesses based on user reviews and product trials, market share and traction, and technology stack if publicly known.
Gap Analysis
Comparing competitor offerings against the requirements gathered earlier reveals gaps — areas where no existing solution adequately addresses user needs. These gaps represent the strongest opportunities for differentiation and should heavily influence the product roadmap.
Build vs. Buy vs. Adapt
Not every problem requires a custom-built solution. The Research phase should honestly evaluate whether it is more effective to build a bespoke application, buy a commercial off-the-shelf product and configure it, or adapt an open-source project to meet the team's needs. This decision should weigh total cost of ownership (including maintenance, licensing, and opportunity cost), time to value, customization requirements, and long-term strategic importance.
Technical Research and Architecture Planning
Technical research validates that the proposed solution is feasible and identifies the best tools and patterns for the job.
Technology Stack Evaluation
Selecting a tech stack involves balancing many factors. Language and framework choice should consider team expertise, ecosystem maturity, performance characteristics, and hiring market availability. Database selection depends on data model complexity, query patterns, consistency requirements, and scale expectations (relational databases like PostgreSQL for structured, transactional data; document stores like MongoDB for flexible schemas; time-series databases like InfluxDB for metric-heavy workloads). Infrastructure decisions around cloud provider, containerization, orchestration, and serverless versus traditional compute shape both cost and operational complexity.
Proof of Concept and Technical Spikes
A proof of concept (PoC) is a small, focused experiment designed to validate a specific technical assumption. For example, a team considering real-time collaboration features might build a PoC using WebSockets to confirm that latency targets are achievable. Technical spikes are time-boxed research tasks (typically one to three days) aimed at answering a specific question, such as "can this third-party API handle our expected request volume?" PoCs and spikes reduce risk by surfacing technical constraints before they derail the project.
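A spike of this kind often boils down to a small measurement harness. The sketch below is self-contained by substituting a local TCP echo server for the real dependency; an actual spike would point the same timing loop at the real service, and the 50 ms target is an assumed figure for illustration.

```python
import socket
import statistics
import threading
import time

def echo_server(listener: socket.socket) -> None:
    """Stand-in for the real dependency: echo bytes back to the client."""
    conn, _ = listener.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # OS-assigned free port
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

# The spike itself: sample round-trip latency many times, then compare
# the 95th percentile against the target rather than the average.
client = socket.create_connection(listener.getsockname())
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)
samples = []
with client:
    for _ in range(200):
        start = time.perf_counter()
        client.sendall(b"ping")
        client.recv(64)
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds

p95 = statistics.quantiles(samples, n=100)[94]
print(f"p95 round-trip latency: {p95:.2f} ms")
assert p95 < 50, "latency target missed -- revisit the approach"
```

Because the result is a percentile against an explicit target, the spike ends with a clear yes/no answer instead of an impression.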
Architecture Decision Records
An Architecture Decision Record (ADR) is a short document that captures a significant architectural choice, the context in which it was made, the options considered, the decision reached, and the consequences (both positive and negative). ADRs create a searchable history of "why" decisions were made, which is invaluable when the team revisits choices months or years later.
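A minimal ADR might look like the following. The product details, numbering, and date are invented for illustration; the Status/Context/Decision/Consequences structure follows the widely used Nygard-style template.

```markdown
# ADR-001: Use PostgreSQL for the primary datastore

## Status
Accepted (2024-03-12)

## Context
The application stores structured, transactional data and must support
ad-hoc reporting queries. The team has prior operational experience
with PostgreSQL; the alternatives considered were MongoDB and MySQL.

## Decision
Use PostgreSQL as the system of record. Revisit if write volume grows
beyond what a single primary can sustain.

## Consequences
+ Strong consistency guarantees and mature tooling.
+ Team can operate the database without new hires.
- Horizontal write scaling will require future work (e.g., sharding).
```

Kept in version control alongside the code, a set of records like this becomes the searchable decision history described above.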
Risk Assessment and Planning
Every project carries risk. The Research phase is the right time to identify, categorize, and plan for the most significant threats.
Risk Identification
Common risk categories include technical risk (unproven technology, integration complexity, performance uncertainty), resource risk (team availability, skill gaps, vendor dependencies), schedule risk (unrealistic timelines, external dependencies, regulatory deadlines), and scope risk (unclear requirements, stakeholder disagreement, feature creep). A risk register documents each identified risk along with its likelihood, potential impact, and the team member responsible for monitoring it.
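A risk register can be as simple as a scored table. The sketch below uses the common likelihood-times-impact exposure score to order the register; the specific risks, scales, and owners are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    category: str    # technical, resource, schedule, or scope
    likelihood: int  # 1 (rare) .. 5 (near certain)
    impact: int      # 1 (minor) .. 5 (severe)
    owner: str       # team member responsible for monitoring

    @property
    def exposure(self) -> int:
        """Simple likelihood x impact score used to rank risks."""
        return self.likelihood * self.impact

register = [
    Risk("Third-party API rate limits unknown", "technical", 3, 4, "Dana"),
    Risk("Single engineer owns the auth workstream", "resource", 2, 5, "Lee"),
    Risk("Compliance review may slip past Q3", "schedule", 4, 3, "Sam"),
]

# Review the highest-exposure risks first at each checkpoint.
for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"[exposure {risk.exposure:2d}] {risk.description} (owner: {risk.owner})")
```

Even this small amount of structure makes it obvious which risks deserve an explicit mitigation strategy and which can simply be watched.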
Mitigation Strategies
For each significant risk, the team should define a mitigation strategy. Mitigation might involve building a PoC to retire technical uncertainty, cross-training team members to reduce key-person dependencies, padding the schedule with buffer time for high-uncertainty tasks, or defining a minimum viable product (MVP) scope that can be delivered even if resources are constrained.
Project Planning
The Research phase culminates in a project plan that includes a work breakdown structure dividing the project into manageable deliverables, a timeline with milestones marking key checkpoints, resource allocation mapping team members to workstreams, and a communication plan defining how progress will be reported and to whom. This plan is a living document that will evolve as the project progresses, but having a well-informed starting point dramatically improves the team's ability to execute.
Key Deliverables
By the end of the Research phase, the team should have produced:

- a clear problem statement and success metrics
- a prioritized requirements document (PRD, user stories, or specification)
- user personas and journey maps
- wireframes or interactive prototypes validated through usability testing
- a competitive analysis and build/buy/adapt recommendation
- a technology stack recommendation supported by PoC results
- architecture decision records for foundational choices
- a risk register with mitigation strategies
- a project plan with timeline and resource allocation
These deliverables form the contract between the Research and Development phases. They ensure that when the team begins coding, they are building the right thing, for the right people, with the right tools.