TOPIC: COLLABORATIVE SOFTWARE
From planning to production: Selected aspects of modern software delivery
Software delivery has never been more interlinked across strategy, planning and operations. Agile practices are adapting to hybrid work, AI is reshaping how teams plan and execute, and cloud platforms have become the default substrate for everything from build pipelines to runtime security. What follows traces a practical route through that terrain, drawing together current guidance, tools and community efforts so teams can make informed choices without having to assemble the big picture for themselves.
Work Management: Asana and Jira
Planning and coordination remain the foundation of any delivery effort, and the market still gravitates to two names for day-to-day project management: Asana and Jira. Each can bring order to multi-team projects and distributed work, yet they approach the job from very different histories.
With a history rooted in large DevOps teams and issue tracking, Jira carries that lineage into its Scrum and Kanban options, backlogs, sprints and a reporting catalogue that leans into metrics such as time in status, resolution percentages and created-versus-resolved trends. Built as a more general project manager from the outset, Asana shows its intent in the way users move from a decluttered home screen to “My Tasks”, switch among Kanban, Gantt and Calendar views using tabs, and add custom fields or rules from within the view rather than navigating to separate screens. The two now look similar at a glance, but their structure and presentation differ, and that influences how quickly a team settles into a rhythm.
Dashboards and Reporting
Those differences widen when examining dashboards and reporting. Jira allows users to create multiple dashboards and fill them with a large range of gadgets, including assigned issues, average time in status, bubble charts, heat maps and slideshows. The designs are sparse yet flexible, and administrators on company-managed accounts can add custom reporting, while the Atlassian Marketplace offers hundreds of additional reporting integrations.
By contrast, the home dashboard in Asana is intentionally pared back, with reports placed in their own section to keep personal task management separate from project or portfolio-level tracking. Its native reporting is broader and more polished out of the box, with pre-built views for progress, work health and resourcing, together with custom report creation that does not require admin-level access.
Interoperability
How well each tool connects to other systems also sets expectations. Jira, as part of Atlassian's suite, has a bustling marketplace with over a thousand apps for its cloud product, covering project management, IT service management, reporting and more. Asana's store is smaller, with under 400 apps at the time of writing, though it continues to grow and offers breadth across staples such as Slack, Teams and Adobe Creative Cloud, as well as a strong showing for IT and developer use cases.
Both tools connect to Zapier (which has also published a detailed comparison of the two platforms), opening pathways to thousands of further automations, such as creating Jira issues from Typeform submissions or making Asana tasks from Airtable records without writing integration code. In practice, many teams will get what they need natively and then extend in targeted ways, whether through marketplace add-ons or workflow automations.
Plans and AI
Plans and AI are where the most significant recent movement has occurred. On the Asana side, a free Personal tier leads into paid Starter and Advanced plans followed by Enterprise, with AI tools (branded "Asana Intelligence") included across paid plans. Those features help prioritise work, automate repetitive steps, suggest smart workflows and summarise discussions to reduce time spent on status communication.
Over at Jira, the structure runs from a free tier for small teams through Standard, Premium and Enterprise plans. "Atlassian Intelligence" focuses on generative support in the issue editor, AI summaries and AI-assisted sprint planning, adding predictive insights to help with resource allocation and automation. It is worth noting that Jira's entry-level paid plan appears cheaper on paper, but real-world total cost of ownership often rises once Marketplace apps, Confluence licences and security add-ons are factored in.
Choosing between the two typically comes down to need. If you want a task manager built for general use with crisp reporting and strong collaboration features, Asana presents itself clearly. If your roadmap lives and breathes Agile sprints, backlogs and issue workflows, and you need deep extensibility across a suite, Jira remains a natural fit.
Scrum: Back to Basics
Method matters as much as tooling. Scrum remains the most widely adopted Agile framework, and it is worth revisiting its essentials when translating plans into delivery. The DevOps Institute tracks the human side of this evolution, noting that skills, learning and collaboration are as central to DevOps success as the toolchain itself. A Scrum Team is cross-functional and self-organising, combining the Product Owner's focus on prioritising a transparent, value-ordered Product Backlog with a Development Team that turns backlog items into a potentially shippable increment every Sprint.
The Scrum Master keeps the framework alive, removes impediments, and coaches both the team and the wider organisation. Sprints run for no longer than four weeks and bundle Sprint Planning, Daily Scrums, a Sprint Review and a Retrospective, with online whiteboards increasingly used to run those ceremonies effectively across distributed and hybrid teams. The Sprint Goal provides a unifying target, and the Sprint Backlog breaks selected Product Backlog items into tasks and steps owned by the team.
Scrum Versus Waterfall
That cadence stands in deliberate contrast to classic waterfall approaches, where specification, design, implementation, testing and deployment proceed in long phases with significant hand-offs between each. Scrum replaces upfront specifications with user stories and collaborative refinement using the "three Cs" of Card, Conversation and Confirmation, so requirements can evolve alongside market needs. It places self-organisation ahead of management directives in deciding how work is done within a Sprint, and it raises transparency by making progress and problems visible every day rather than at phase gates.
Teams feel the shift when they commit to delivering a working increment each Sprint rather than aiming for a distant release, and when they see the cost of change flatten because feedback arrives through Reviews and Retrospectives rather than months after decisions have been made.
The State of Agile
Richer context for these shifts appears in longitudinal views of industry practice. The 18th State of Agile Report, published by Digital.ai in late 2025, observes that Agile is adapting rather than fading, with adoption remaining widespread while many organisations rebuild from the ground up to focus on measurable outcomes. The report, drawing on responses from approximately 350 practitioners, notes that AI and automation are accelerating change while introducing fresh expectations around data quality, decision-making and governance, and it emphasises that outcomes have become the currency connecting strategy, planning and execution.
That aligns with the Agile Alliance's ongoing work to re-examine Agile's core values for enterprise settings, as well as with the joint Manifesto for Enterprise Agility initiative with PMI, which argues for adaptability as a strategic advantage rather than a team-level method choice. Significantly, the 18th report found that only 13% of respondents say Agile is deeply embedded across their business, and that only 15% of business leaders participate meaningfully in Agile practices, suggesting that leadership alignment remains one of the most persistent blockers to realising the framework's full potential.
Continuous Delivery and CI/CD Tooling
Getting from plan to production relies on engineering foundations that have matured alongside Agile. Continuous Delivery reframes deployment as a safe, rapid and sustainable capability by keeping code in a deployable state and eliminating the traditional post-"dev complete" phases of integration, testing and hardening. By building deployment pipelines that automate build, environment provisioning and regression testing, teams reduce risk, shorten lead time and can redirect human effort towards exploratory, usability, performance and security testing throughout delivery, not just at the end.
The results can be counterintuitive. High-performing teams deploy more frequently and more reliably, even in regulated settings, because painful activities are made routine and small batches make feedback economical.
CI/CD in Practice
Contemporary CI/CD tools express that philosophy in developer-centred ways. A Travis CI build can often be described in minutes using minimal YAML configuration, specifying runtimes, caching dependencies, parallelising jobs and running tests across multiple language versions. GitHub Actions and Azure Pipelines (part of Azure DevOps) provide similar capabilities at broader scale, with managed runners, gated releases, integrated artefact feeds, security scanning and policy controls that matter in larger enterprises.
The emphasis across these platforms is on speed to first pipeline, consistency across environments and adding guardrails such as signed artefacts, scoped credentials and secret management, so that velocity does not undercut safety.
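One such guardrail can be sketched in a few lines of shell. The example below uses checksum verification as a simple stand-in for full artefact signing; the file names and messages are illustrative rather than drawn from any particular platform:

```shell
# Produce a build artefact and record its SHA-256 checksum at build time.
mkdir -p build
echo "compiled output" > build/app.tar
sha256sum build/app.tar > build/app.tar.sha256

# At deploy time, refuse to proceed unless the checksum still matches.
if sha256sum -c build/app.tar.sha256; then
    echo "artefact verified, safe to deploy"
else
    echo "artefact mismatch, aborting" >&2
    exit 1
fi
```

In a real pipeline the checksum (or a cryptographic signature) would be produced by the build stage and verified by the release stage, so that nothing reaches production without passing through the pipeline itself.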
Cloud Native Architecture
Architecture and platform choices amplify or constrain delivery flow. The cloud native ecosystem, curated by the Cloud Native Computing Foundation (CNCF) under the Linux Foundation, has become the common bedrock for organisations standardising on Kubernetes, service meshes and observability stacks. Hosting more than 200 projects across sandbox, incubating and graduated maturity levels, it spans everything from container orchestration to policy and tracing, and brings together vendors, end users and maintainers at events such as KubeCon + CloudNativeCon.
Sitting higher up the stack, Knative provides building blocks for HTTP-first, event-driven serverless workloads on Kubernetes. Created at Google before joining the CNCF, where it has since reached graduation status, it unifies serving and eventing, so teams can scale to zero on demand while routing asynchronous events with the same fluency as web requests. For teams that need to manage the underlying cluster infrastructure declaratively, Cluster API provides a Kubernetes-native way to provision, upgrade and operate clusters across cloud and on-premises environments, bringing the same declarative model used for application workloads to the infrastructure layer itself.
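To make that concrete, a minimal Knative Service manifest might look like the sketch below. The service name and image URL are assumptions for illustration, and the scale-to-zero annotation follows the naming used in recent Knative Serving documentation:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                     # illustrative service name
spec:
  template:
    metadata:
      annotations:
        # permit the service to scale down to zero replicas when idle
        autoscaling.knative.dev/min-scale: "0"
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # hypothetical image
```

Applying a manifest like this gives a routable, autoscaled HTTP service without the Deployment, Service and Ingress objects that plain Kubernetes would require.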
APIs and Developer Ecosystems
API-driven integration is part of the cloud native picture rather than an afterthought. The API Landscape compiled by Apidays shows the sheer diversity of stakeholders and tools across the programmable economy, from design and testing to gateways, security and orchestration. Developer ecosystems such as Cisco DevNet bring this to ground level by offering documentation, labs, sample code and sandboxes across networking, security and collaboration products, encouraging infrastructure as code with tools like Terraform and Ansible.
Version control and collaboration sit at the centre of modern delivery, and GitHub's documentation, spanning everything from Codespaces to REST and GraphQL APIs, reflects that centrality. The breadth of what is available through a single platform, from repository management to CI/CD workflows and AI-assisted coding, illustrates how much of the delivery stack can now be coordinated in one place.
Security: An End-to-End Discipline
Security threads through every layer and is increasingly treated as an end-to-end discipline rather than a late-stage gate. The Open Source Security Foundation (OpenSSF) coordinates community efforts to secure open-source software for the public good, spanning working groups on AI and machine learning security, supply chain integrity and vulnerability disclosure, and offering guides, courses and annual reviews.
On the cloud side, a Cloud-Native Application Protection Platform (CNAPP) consolidates capabilities to protect applications across multi-cloud estates. Core components typically include Cloud Infrastructure Entitlement Management (to rein in excessive permissions), Kubernetes Security Posture Management (to maintain container orchestration best practices and flag misconfigurations), Data Security Posture Management (to classify and monitor sensitive data) and Cloud Detection and Response (to automate threat response and connect to security orchestration platforms).
Increasingly, AI-driven Security Posture Management sits across these layers to spot anomalies and predict risks from historical patterns, though this brings its own challenges around false positives and model bias that require careful adoption planning. Vendors such as Check Point offer CNAPP products including CloudGuard with unified management and compliance automation. While such examples illustrate what is available commercially, it is the architecture and functions described above that define the category itself.
Site Reliability Engineering
Reliability is not left to chance in well-run organisations. Site Reliability Engineering (SRE), pioneered and documented by Google, treats operations as a software problem and asks SREs to protect, provide for and progress the systems that underpin user-facing services. The remit ranges from disk I/O considerations to continental-scale capacity planning, with a constant focus on availability, latency, performance and efficiency.
Error budgets, automation, toil reduction and blameless post-mortems become part of the vocabulary for teams that want to move fast without eroding trust. The approach complements Continuous Delivery by turning operational quality into something measurable and improvable, rather than a set of aspirations.
Code Quality, Testing and Documentation
For all the automation and platform power now available, the basics of code quality and testing still count. The Twelve-Factor App methodology remains relevant in encouraging declarative automation, clean contracts with the operating system, strict separation of build and run stages, stateless processes, externalised configuration, dev-prod parity and treating logs as event streams rather than files to be managed. It was first presented by developers at Heroku and continues to inform how teams design applications for cloud environments.
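The externalised-configuration principle is straightforward to illustrate in shell: a process reads its settings from the environment and fails fast when they are missing. The variable name and URL below are conventional examples, not values prescribed by the methodology:

```shell
# In a real deployment the platform would inject this; simulate it here.
export DATABASE_URL="postgres://db.internal:5432/app"

# Twelve-factor style: read config from the environment, fail fast if absent.
: "${DATABASE_URL:?DATABASE_URL must be set}"

# Extract the scheme to decide which backend driver applies.
echo "Backend scheme: ${DATABASE_URL%%:*}"   # prints "Backend scheme: postgres"
```

Because the code never reads a config file, the same build artefact runs unchanged in development, staging and production, with only the environment differing.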
Documentation practices have also evolved, from literate programming's argument that source should be written as human-readable text with code woven through, to modern API documentation standards that keep codebases easier to change and onboard. General-purpose resources such as the long-running Software QA and Testing FAQ remind teams that verification and validation are distinct activities, that a spectrum of testing types is available and that common delivery problems have known countermeasures when documentation, estimation and test design are taken seriously.
AI in Software Delivery
No survey of modern software delivery can sidestep artificial intelligence. Adoption is now near-universal: the 2025 DORA State of AI-Assisted Software Development report, drawing on responses from almost 5,000 technology professionals worldwide, found that around 90% of developers now use AI as part of their daily work, with the median respondent spending roughly two hours per day interacting with AI tools. More than 80% report feeling more productive as a result. The picture is not straightforward, however. The same research found that AI adoption correlates with higher delivery instability, more change failures and longer cycle times for resolving issues because the acceleration AI brings upstream tends to expose bottlenecks in testing, code review and quality assurance that were previously hidden.
The report's central conclusion is that AI functions as an amplifier rather than a remedy. Strong teams with solid engineering foundations use it to accelerate further, while teams carrying technical debt or process dysfunction find those problems magnified rather than resolved. This means the strategic question is not simply which AI tools to adopt, but whether the underlying platform, workflow and culture are ready to benefit from them. The DORA AI Capabilities Model, published as a companion guide, identifies seven foundational practices that consistently improve AI outcomes, including a clear organisational stance on AI use, healthy data ecosystems, working in small batches and a user-centric focus. Teams without that last ingredient, the report warns, can actually see performance worsen after adopting AI.
At the tooling level, the landscape has moved quickly. Coding assistants such as GitHub Copilot have gone from novelty to standard practice in many engineering organisations, with newer entrants including Cursor, Windsurf and agentic tools like Claude Code pushing the category further. The shift from "copilot" to "agent" is significant: where earlier tools suggested completions as a developer typed, agentic systems accept a goal and execute a multistep plan to reach it, handling scaffolding, test generation, documentation and deployment checks with far less human intervention. That brings real efficiency gains and also new governance questions around traceability, code provenance and the trust that teams place in AI-generated output. Around 30% of DORA respondents reported little or no trust in code produced by AI, a figure that points to where the next wave of tooling and practice will need to focus.
Putting It Together
Translating all of this into practice looks different in every organisation, yet certain patterns recur. Teams choose a work management tool that matches the shape of their portfolio and the degree of Agile structure they need, whether that is Asana's lighter-weight task management with strong reporting or Jira's DevOps-aligned issue and sprint workflows with deep extensibility. They then align on a Scrum-like cadence if iteration and feedback are priorities, or adopt hybrid approaches that sustain visibility while staying compatible with regulatory or vendor constraints.
Build, test and release are automated early so that pipelines, not people, become the route to production, and cloud native platforms keep environments reproducible and scalable across teams and geographies. Instrumentation ensures that security posture, reliability and cost are visible and managed continuously rather than episodically, and deliberate investment in engineering foundations (small batches, fast feedback and strong platform quality) creates the conditions that the evidence now shows are prerequisites for AI to deliver on its promise rather than amplify existing dysfunction.
If anything remains uncertain, it is often the sequencing rather than the destination. Few organisations can refit planning tools, delivery pipelines, platform architecture and security models all at once, and there is no definitive order that works everywhere. Starting where friction is highest and then iterating tends to be more durable than a one-shot transformation, and most of the resources cited here assume that change will be continuous rather than staged. Agile communities, cloud native foundations and security collaboratives exist because no single team has all the answers, and that may be the most practical lesson of all.
Getting to know Jira, its workflows, test management capabilities and the need for governance
Developed by Atlassian and first released in 2002 as a straightforward bug and issue tracker aimed at software developers, Jira has since grown into a platform used for project management across a wide range of industries and disciplines. The name itself is a truncation of Gojira, the Japanese name for Godzilla, originating as an internal nickname used by Atlassian developers for Bugzilla, the bug-tracking tool they had previously relied upon.
A Family of Products, Each With a Purpose
The Jira ecosystem has expanded well beyond its original single offering, and it is worth understanding what each product is designed to do. Jira (formerly marketed as Jira Software, now unified with Jira Work Management) remains the flagship, built around agile project management with Scrum and Kanban boards at its core. Jira Service Management serves IT operations and service desk teams, handling ticketing and customer support workflows; it originated as Jira Service Desk in 2013, following Atlassian's discovery that nearly 40 per cent of their customers had already adapted the base product for service requests, and it was rebranded in 2020. At the enterprise level, Jira Align connects team delivery to strategic business goals, while Jira Product Discovery helps product teams capture feedback, prioritise ideas and build roadmaps. Together, these products span the full organisational hierarchy, from individual contributors up to executive portfolio management.
Core Features
Agile Boards and Backlog Management
Jira supports a range of agile methodologies, with two primary project templates available to teams. The Scrum template is designed for teams that deliver work in time-boxed sprints, providing backlog management, sprint planning and capacity tracking in a single view. The Kanban template, by contrast, is built around a continuous flow of work, helping teams visualise tasks as they move through each stage of a process without the constraint of fixed iterations. Both templates support custom configurations for teams whose ways of working do not map neatly to either model.
Reporting and Analytics
Jira's reporting suite provides visibility into project progress through various charts and metrics. The Burndown chart tracks remaining story points against the time left in a sprint, offering an indication of whether the team is on course to complete its committed work. The Burnup chart takes a complementary view, tracking how much work has been completed over time and making it straightforward to compare planned scope against actual delivery. These tools are useful for identifying patterns in team performance, though they are most informative when used consistently over several sprints rather than in isolation.
Custom Workflows
Teams can design workflows that reflect their own processes, defining the states an issue passes through and the transitions between them. Automation rules can be applied to handle repetitive steps without manual intervention, reducing administrative overhead on routine tasks. This flexibility is one of the more frequently cited reasons for adopting Jira, though it does require ongoing governance to prevent workflows from becoming inconsistent or unwieldy as teams and processes evolve.
Jira Query Language
Jira Query Language (JQL) provides a structured way to search and filter issues across projects, enabling teams to construct precise queries based on any combination of fields, statuses, assignees, dates and custom attributes. For organisations that invest time in learning it, JQL is a practical tool for building custom reports and dashboards. It is also the underlying mechanism for many of Jira's more advanced filtering and automation features.
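As an illustration, a JQL query can be paired with Jira Cloud's REST search endpoint from the command line. The site, credentials and project key below are hypothetical, and the script prints the curl invocation rather than executing it:

```shell
# A typical JQL query: unfinished issues in one project, assigned to the
# current user, ordered by priority.
JQL='project = WEB AND statusCategory != Done AND assignee = currentUser() ORDER BY priority DESC'

# curl's --get/--data-urlencode pair URL-encodes the query safely; echo
# prints the command for inspection instead of calling the live API.
echo curl --get "https://example.atlassian.net/rest/api/3/search" \
  --user "me@example.com:API_TOKEN" \
  --data-urlencode "jql=$JQL"
```

The same query could equally be saved as a filter inside Jira and reused to drive boards, dashboards and subscription emails.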
Integration Options
Jira connects with a range of tools both within and outside the Atlassian ecosystem. Confluence handles documentation, Bitbucket manages code repositories and links commits directly to Jira issues, and Loom, acquired by Atlassian in 2023, adds asynchronous video communication. Third-party integrations, including Zoom and a broad catalogue of tools available through the Atlassian Marketplace, extend this further for teams with specific requirements.
Test Management With Xray
Jira does not include dedicated test management functionality by default, and teams that need to manage structured test cases alongside their development work typically turn to the Xray plugin, one of the most widely used additions in the Atlassian Marketplace. Xray operates as a native Jira application, meaning it adds new issue types directly to the Jira instance rather than sitting as a separate external tool. The issue types it introduces include Test, Test Set, Test Plan and Test Execution, all of which behave like standard Jira issues and can be searched, filtered and reported on using JQL.
A key capability is requirements traceability: Xray links test cases directly to the user stories and requirements they cover, and connects those in turn to any defects raised during execution. This gives teams a clear picture of test coverage and release readiness without having to leave Jira or reconcile data from separate systems. Test executions can be manual or automated, and Xray integrates with CI/CD toolchains (including Jenkins and Robot Framework) via a REST API, allowing automated test results to be published back into Jira and associated with the relevant requirements.
Xray also supports Behaviour-Driven Development (BDD), enabling teams to write tests in Gherkin syntax and manage them alongside their other Jira work. For organisations already using Jira as their central project management tool, Xray offers a practical route to bringing QA activities into the same workflow rather than maintaining a separate test management system.
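A rough sketch of that round trip, assuming Xray's cloud import endpoint and a previously obtained API token (the issue keys, statuses and token variable are all illustrative, and the upload command is printed rather than executed):

```shell
# Build an Xray-format results payload; keys and statuses are illustrative.
cat > results.json <<'EOF'
{
  "testExecutionKey": "PROJ-120",
  "tests": [
    { "testKey": "PROJ-57", "status": "PASSED" },
    { "testKey": "PROJ-58", "status": "FAILED" }
  ]
}
EOF

# Print the upload call to Xray cloud's import endpoint; $XRAY_TOKEN would
# hold a bearer token obtained from Xray's authentication endpoint.
echo curl -X POST "https://xray.cloud.getxray.app/api/v2/import/execution" \
  -H "Authorization: Bearer \$XRAY_TOKEN" \
  -H "Content-Type: application/json" \
  --data @results.json
```

A CI job would typically generate this payload from its test runner's output and post it at the end of each pipeline run, so execution history accumulates against the relevant requirements in Jira.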
Who is Jira Best Suited For?
Jira is generally considered most suitable for larger teams that require detailed control over workflows, reporting and resource allocation, and that have the capacity to dedicate administrative effort to the platform. Smaller teams or those without a dedicated Jira administrator may find the learning curve significant, particularly when configuring custom workflows or working with more advanced reporting features. Pricing is subscription-based, with tiers determined by user count and deployment model (cloud-hosted or self-managed), which means costs can increase substantially as an organisation grows.
Project Types: Tailoring Access to Needs
Jira divides its project spaces into two categories that serve different audiences. Team-managed projects offer simplified configuration for smaller, autonomous teams that want to get started without involving a Jira administrator. Company-managed projects grant administrators full control over customisation, permissions and settings, making them more appropriate for enterprises with complex requirements and multiple teams operating within the same instance. The two types can coexist within the same deployment, giving organisations the option to apply different governance models to different teams as their needs dictate.
Strengths and Limitations
Jira's scalability is one of its more consistent strengths, in terms of both the size of the user base it can support and the complexity of workflows it can accommodate. Its query functions give teams a precise way to interrogate project data, and its breadth of integrations means it can be connected to most standard development and collaboration toolchains.
A significant consideration for any Jira deployment is the degree of upfront decision-making it requires. Because the platform places few constraints on how it is configured, teams must establish their own conventions around workflow design, issue hierarchy, naming and permissions before adoption begins in earnest. Without that groundwork, it is straightforward for individual teams to configure Jira in incompatible ways, making cross-team reporting difficult and creating inconsistencies that become harder to unpick over time. Organisations that treat Jira as something to be governed, rather than simply installed, tend to get considerably more out of it.
The principal technical limitation is its dependence on the wider Atlassian ecosystem. Advanced portfolio planning, capacity forecasting and cross-programme dependency management typically require either a higher-tier plan or additional tooling. Advanced Roadmaps (now called Plans) are available natively within Jira Premium and Enterprise, providing cross-team timeline planning and scenario modelling. For capacity planning, budget tracking and timesheet management, many organisations turn to third-party Marketplace tools such as Tempo. Teams evaluating Jira should factor in both the cost of the appropriate licence tier and any supplementary tooling they are likely to need.
Where to Go From Here
Jira has grown considerably from the issue tracker it was when first released in 2002, and is now used by over 300,000 organisations worldwide. Its capabilities are broad, and its configurability makes it adaptable to a wide range of team structures and workflows. That same configurability, however, means the platform rewards investment in setup and ongoing administration, and organisations should assess whether they have the resources to realise that potential before committing. For those looking to explore further, Atlassian's official guides, its wider documentation, the support portal, the Atlassian Community and the developer documentation are useful starting points, and there are courses from an independent provider too.
Generating Git commit messages automatically using aicommit and OpenAI
One of the overheads of using version control systems like Subversion or Git is the need to create descriptive messages for each revision. Now that GitHub has Copilot, it can generate those messages for you. However, that still leaves anyone with a local Git repository out in the cold, even if they are uploading to GitHub as their remote repo.
One thing that a Homebrew update does is highlight other packages that are available, which is how I learned of a tool that helps with this: aicommit. Installing it is just a simple command away:
brew install aicommit
Once that is complete, you have a tool that generates messages describing every commit using GPT. For it to work, you do need to get yourself set up with OpenAI's API services and generate a token that you can use. That needs an environment variable to be set to make it available. On Linux (and Mac), this works:
export OPENAI_API_KEY=<Enter the API token here, without the brackets>
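To save re-exporting the variable in every new session, it can be appended to a shell profile. The sketch below assumes bash; other shells use different files (zsh reads ~/.zshrc, for instance):

```shell
# Append the export to the bash profile so new shells pick it up,
# but only if it is not already there. Replace the placeholder with
# your real token.
profile="$HOME/.bashrc"
grep -qs OPENAI_API_KEY "$profile" || \
  echo 'export OPENAI_API_KEY=<Enter the API token here>' >> "$profile"
```

The `grep -qs` guard keeps the profile from accumulating duplicate lines if the snippet is run more than once.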
Because I use this API for Python scripting, that part was already in place. Thus, I could proceed to the next stage: inserting it into my workflow. For the sake of added automation, this uses shell scripting on my machines. The basic sequence is this:
git add .
git commit -m "<a default message>"
git push
The first line above stages everything while the second commits the files with an associated message (git makes this mandatory, much like Subversion) and the third pushes the files into the GitHub repository. Fitting in aicommit then changes the above to this:
git add .
aicommit
git push
There is now no need to define a message because aicommit does that for you, saving some effort. However, token limitations on the OpenAI side mean that the aicommit command can fail, causing the update operation to abort. Thus, it is safer to catch that situation using the following code:
git add .
if ! aicommit 2>/dev/null; then
    echo "aicommit failed, using fallback"
    git commit -m "<a default message>"
fi
git push
This now informs me what has happened when the AI option is overloaded, and the scripts fall back to a default option that is always available with git. While there is more to my git scripting than this, the snippets included here should get across how things can work. They work well for small push operations, which is what happens most of the time; usually, I do not attempt more than that.
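For completeness, the whole sequence can be wrapped into a single reusable function along these lines. This is a sketch rather than my exact script, and the fallback message is a placeholder:

```shell
# Stage everything, commit (AI-generated message with a plain fallback),
# then push to the configured remote in one call.
safe_push() {
  git add .
  if ! aicommit 2>/dev/null; then
    echo "aicommit failed, using fallback"
    git commit -m "Routine update"
  fi
  git push
}
```

With that in a shell startup file, a whole update becomes a single `safe_push` from anywhere inside the repository.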