Thoughts on Scrum

Categories: Programming, Architecture

Introduction

A few years ago I started working at a company that loosely uses Scrum-like team organisation for software development. I’ve worked on around half a dozen significant projects in the past (for various employers or customers) that used more-or-less Scrum-like processes, and was never particularly convinced, but thought I should look at it more closely again. In particular, I read the book The Scrum Field Guide by Mitch Lacey. This article is a (rather late) review of the book, and a personal, critical look at the concepts of Scrum in general.

The book is certainly well written, and interesting for those with basic-to-moderate knowledge of Scrum practices.

These notes are mostly for my own reference - I’m certainly not a recognised expert in this area (though I have been in the IT industry for over 25 years, and experienced a wide range of successful, not-so-successful, and completely failed projects). I am a fan of Agile development in general (having lived through many failed waterfall-based projects), though not any particular methodology. I also work regularly on open-source projects which are typically agile-ish (if they can be described as having a development process at all).

A Summary

What I agree with about Scrum in general, and the book The Scrum Field Guide in particular:

  • do development in teams of 4-9 people, who together should have most (ideally all) of the skills necessary to create, test, and deliver the product.
  • give reasonable autonomy to teams and even team members, ie have most decisions made by the people who are going to implement those decisions, rather than having decisions made externally and imposed on the people affected by them.
  • record requirements as a priority-ordered list (“product backlog”), where higher-priority items are developed earlier and documented in more detail
  • include a domain expert in the team (“product owner”)
  • regularly release production-quality code to customer (whether internal or external)
  • regularly get feedback from customer (real end users)
  • regularly review backlog and ensure there are enough implementation-ready items waiting to be done
  • regularly review the development process (“retrospectives”)
  • projects should never run longer than a year
  • work items should be considered “done” only when tested, documented, and in all ways ready for production

What I partially agree with:

  • daily standups
  • sprints
  • assigning “story points” to the items in the project backlog (ie the requirements)
  • measuring team velocity
  • having a dedicated team organiser (“Scrum Master” - previously known as a “Project Manager”)

What I definitely do not agree with:

  • assigning features to “sprints” (“commitment”)
  • burn-down charts and related concepts (not technically part of Scrum but commonly applied)

Interestingly, I agreed with far more of the book (and thus Scrum in general) than I had expected. Leave out the “assign tasks to the next sprint” step and I have very little problem with the methodology.

The Scrum Field Guide does repeatedly warn against “customising” Scrum - exactly what I am suggesting with the partially-agree/do-not-agree lists above. I therefore make these suggestions with caution - but IMO with good reason.

Believability, Scepticism and Caution

First, a few notes about putting trust in information sources such as a book or article about Scrum.

When anyone makes claims, and particularly when they try to convince me of something, I like to consider their possible motives.

The Agile Manifesto is a simple document produced by a group of well-regarded software experts, and there is no obvious money being made directly from it. Ok, the people whose names are on the original version of the document do perhaps get a few more invitations to speak at conferences, and it certainly is a door-opener when offering consulting services to potential customers - but the effect does not seem to be large or direct. I therefore am inclined to think that the contents of the Manifesto truly reflect their beliefs about good software development principles after long careers in the industry.

Scrum, however, is something else. There is a whole industry of Scrum mentoring, conferences, books, and software products. The big names in this industry are making lots of money, and therefore I prefer to look at the proposed concepts of Scrum with care before accepting them as truth. I don’t mean that it is wrong - just that there are motivations for people to promote it regardless of its correctness.

Similarly, those who have invested in Scrum training, are Certified Scrum Coaches, and have built businesses around that knowledge have strong incentives to promote it. It would take a lot of personal integrity for someone with a career in “Scrum” to say “actually, I’ve come to realise that it doesn’t work; I’ll go do something else now”. Even deviating from the Scrum doctrine in just a few areas could be damaging to a career.

There is also a kind of cognitive bias where anyone who has invested money or time into learning something is less likely to view that knowledge critically (to see its flaws) than an outsider, as that would mean admitting their investment is (at least partially) worthless. It takes a relatively strong personality to step away from something that cost effort to obtain. As with any community, there is also the influence of groupthink.

I certainly don’t mean to insult any professional Scrum authors or practitioners with any of the above. I just mean that if you go into an Adidas store looking for running shoes, you are not likely to be offered a balanced and impartial comparison of the options. That doesn’t mean that Scrum in general, and the author of this book in particular, are not 100% correct. They may well be, and certainly have more experience trying to apply Scrum than I have. Nevertheless, I wanted to evaluate the principles of Scrum against my own experience.

Fully Convinced

Things about Scrum in general, and the Scrum Field Guide in particular, that I completely agree with.

Team Size

It just seems to be a part of human nature that we work best in teams of less than 10. Any attempt to form larger groups just ends up with subgroups naturally forming. And it really is important for morale and communication that the team members understand how others in the team think and work. In addition, it is important to include all members of a team in discussions and decision-making - which is simply impractical with more than 10 people.

When more people are needed on a project, then there need to be multiple teams, with communication and collaboration within each team being informal and efficient while inter-team communication is more formal and therefore less efficient. The work to be done then needs to be distributed across teams so that inter-team communication happens less often than intra-team.

Individual and Team Autonomy

Very few people like following orders. People in the IT industry are generally required to be informed, self-motivated, creative, and to have pride in their work - the kind of people who like following orders even less than the average.

People “in the trenches” often also have information that others do not - and therefore are in a better position to make decisions optimal for their circumstances.

The implication is that decision-making about how to implement something (rather than what to implement) should be largely done within a team.

On the other hand, software developers focused on day-to-day work often do not have the time to look at “the big picture”, or follow the latest technologies. There therefore needs to be some decision-making at a more abstract cross-team level to produce company-wide consistency and long-term improvements. Such decisions can however be made collaboratively; the existence of this level of decision-making does not necessarily imply the traditional hierarchical “architecture board” that lays down rules for all.

As far as I know, Scrum in general, and the Field Guide in particular, do not have a lot to say about this topic. However this approach at least does not conflict with the Scrum principles, and is IMO generally accepted as “best practice” in modern IT organisations.

Requirements as a priority-ordered list with rough estimates

The waterfall approach starts development with a full requirements document, complete to the point that developers should not need to ask the customer any questions at all; the delivered software just needs to match the documented requirements. This was always nonsense.

Instead, break requirements down into coarse-grained items. Then roughly estimate the size of the items (see “story points”). Ensure that any “technical features” which are pre-requisites of the business features are also present (eg the need for an authentication system). The product owner should have an idea of how much money each feature will bring (new dollars earned, or existing expenses saved); the items with the greatest profit-to-development-effort ratio should be delivered first, ie be assigned the highest priority - though many other factors can also affect the ordering.

Then the requirements at the top of the list need to be defined in sufficient detail that a developer can work on them, and QA (if it exists) can test them. Requirements that are likely to be worked on within the next month need moderate detail, and requirements that are low on the priority list can be quite vague. The customer should be free to add new items to the list, or reorder the list, while the project is running. Of course that will affect the delivery dates for items later on the list, and the cost to “deliver all items”, but that is obvious to all.

In order to assign reasonable priorities, it is necessary to have a feel for relative development times - even for features that are currently only vaguely specified. Of course, actual “days until delivery” cannot be estimated at that level of detail; the “story points” approach seems to be the best solution to this at the moment - figuring out whether some feature is “as complex as” or “5 times more complex than” another feature is not too unreasonable. The trick is then to resist pressure from management to convert this into “delivery dates”; see later for more thoughts on estimations.

This view is pretty much consistent with Scrum’s concept of a backlog.
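
As a small illustration of the value-versus-effort ordering described above, here is a minimal sketch in Python. The item names, point values, and the simple value-per-point ratio are all hypothetical - neither Scrum nor the Field Guide prescribes any particular calculation - but it shows the kind of rough ranking a product owner might start from before other factors are taken into account.

    # Hypothetical sketch: rank backlog items by rough value per story point.
    # Names and numbers are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class BacklogItem:
        title: str
        story_points: int     # relative effort estimate
        business_value: int   # rough expected benefit, in arbitrary units

    backlog = [
        BacklogItem("User login", 8, 40),
        BacklogItem("CSV export", 3, 15),
        BacklogItem("Audit trail", 13, 20),
    ]

    # Highest value per point first; in practice dependencies, risk, and
    # deadlines will also influence the final ordering.
    backlog.sort(key=lambda i: i.business_value / i.story_points, reverse=True)
    for item in backlog:
        print(f"{item.title}: {item.business_value / item.story_points:.1f} value/point")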

Domain Expert/Product Owner

Developers sometimes need to talk to someone who knows the requirements; even the best “requirements documents” are not complete.

There needs to be a first phase before development starts in which the domain expert gathers the requirements at a rough level and forms the first prioritised requirements list; there is no need to involve developers in this, though involving an architect might be worthwhile. The proposed architecture should then be discussed with the development team, reviewed for plausibility, and alternatives considered. Remember that developers should have a fair bit of autonomy and input into the software they are developing.

Before implementation of a requirement starts, it then needs to be refined to a level at which code can be written; this should be done as a collaboration between a domain expert, the people writing the actual code, and the people defining automated tests (if they are different). There needs to be trust and good communication between the domain expert and the developers, and the domain expert needs to be available for follow-up questions. This means that the domain-expert for a project should be part of the team, not external.

In the case of “bespoke software development” where the development team works for a different company than the end customer, can a customer employee act as the team domain expert? I think not; firstly the level of trust needed is hard to obtain cross-company. Each side needs to be able to express their true concerns here without worrying about their words being escalated to senior management and becoming part of an inter-company disagreement. In addition, the domain expert needs to sit with the team much of the time during the project - the kind of “off-site” work that experienced people generally have no interest in doing.

IMO, this “Scrum product owner” is simply what used to be known as a “business requirements analyst”. The primary difference from the old role is that development starts before all requirements have been completely documented; instead the requirements analyst works in a “just in time” model, aiming to stay just a few steps ahead of the development team. This approach:

  • brings the ability to change requirements without discarding lots of existing work
  • solves arguments about “is this a requirement for this project”; any requirement can be added to the list. The argument then simply becomes “what is the priority relative to other items” which is easier to solve. And low-priority items don’t need to be specified in detail until they near the top of the list ie it becomes clear that they really are going to be implemented.
  • solves the problem of the customer domain experts running and hiding from the development team in order to avoid “being responsible” for any incorrectly-documented requirements. In particular, “getting signoff on the requirements” in a waterfall model was always very difficult, as nobody wants to be the fall-guy for mistakes. Having requirements defined just-in-time, in a somewhat less formal manner, and without big “signoffs”, reduces this pressure.
  • allows requirements to be defined relative to the current functionality of the new project (as visible in the current release). It is far easier to picture the “next step” for software than to picture “step N” of something that does not yet exist.

Regularly release production-quality code

I’ve worked on a couple of projects that used the “big bang delivery style” and they did not end well. When a developer can get away with marking an item as “done” when it is really only 60% there, then at some time there is going to be a nasty surprise; instead “done” really does need to mean tested, documented, and releasable.

Providing regular packages of software to the customer that they could install into production (whether they actually do or not) builds a very strong feeling of trust.

And having code in a test-environment that can be accessed by domain experts (including ideally people who would use the new system in production) is a pre-requisite for iterative requirements analysis (see above) and quality feedback (see next).

Regularly get feedback from customer

This is almost a no-brainer. It’s a core principle of agile development in general, and Scrum in particular.

Quite when and how to get feedback is the question; Scrum couples this to the length of the sprint which IMO is not absolutely necessary, but also not unreasonable.

The Scrum Field Guide makes an excellent point that in a 6-month project using Scrum with 4-week “sprints”, there are just 5 opportunities for the customer to give feedback about whether what is being built is actually what they need. That is probably “just enough” - but even more often would be better, eg delivering every two weeks or every week. Best of all is to have a customer-accessible environment which is updated immediately after each feature is marked as done.

Abstractly imagining how software will work before it exists is a hard task; software architects are supposedly specialists in this but often make mistakes too. Expecting domain experts to do this is unreasonable. Instead, give them a system that works and you’ll get a far better idea of whether what is being created is actually what is needed.

Waterfall often delivered “what was specified but not what was wanted”. And businesses and the environment in which they operate do change rapidly. Feedback during the project, based upon concrete deliveries of “the current state” is truly critical.

Of course, the customer’s staff have other work to do than looking at the software-in-progress every day, and the team need to ensure that they are getting feedback. It therefore seems reasonable to deliver new code on a calendar basis (eg every N weeks) and to ask the customer for feedback on the new release on the same schedule. Scrum delivers and requests feedback “at the end of each sprint” (the sprint review), which is not the only solution, but a reasonable one.

Regularly review backlog

As noted earlier, requirements are best represented as a priority list where the high-priority items (next to be implemented) are documented in detail (as a partnership of domain expert and developers) while lower-priority items are left vaguer.

The product owner then needs to “keep one step ahead” of the developers, ensuring there are always items available to work on.

While theoretically the maintenance of this list could be “trigger-based”, eg done only when the customer requests a change or the number of implementable items drops below N, it is probably best to instead check the backlog on a regular calendar schedule - particularly as developers need to interrupt their regular work to participate in design and prioritisation, possibly together with external assistants such as architectural experts.

Scrum always performs this task “at the start of each sprint” which is not the only solution but a reasonable one.

Regularly review the development process

Much of what Scrum describes has been done for decades by competent teams, and I’d certainly experienced it before Scrum was a thing. However, the concept of regular “retrospectives” dedicated to thinking about how the team’s processes could be improved was a revelation for me - so obvious, yet not something I had seen before.

And it should just be done. On a regular calendar basis. Scrum performs this “at the end of each sprint” which for me feels a bit too often (particularly with short sprints), but regularly scheduled retrospectives are truly great.

Projects should never run longer than 1 year

Well, the Scrum Field Guide does not explicitly rule out longer projects, but it does recommend strongly against them. I agree; if something is not truly in production within a year then the chances it will be relevant when complete are very low.

My personal feeling is that two-person-years of development is about the maximum, ie two developers for a year, four for 6 months, etc. Get the stuff released, and start a new project to continue the work.

It’s also just a morale-boost to “finish something” and have a party. It’s also a good time for developers to consider moving on to new roles, new teams, etc. Or to consider new architectures and technologies for the next phase. Projects that just run and run can lead to a feeling of boredom, lack of motivation, and stale thoughts.

Done only when Done

The “definition of done” process in Scrum seems somewhat over-complicated to me, but the idea is generally good.

I remember working on one project that had awards for “the most productive developer of the week”. Management kept giving it to the same developer every week, because he closed more tickets than anyone else. The rest of us were too busy fixing up his poor code.

If marking a ticket as “done” does not mean it is production-quality, then how can the progress to production-quality code be tracked?

A ticket can potentially be handed-off from one team member to another (eg dev to testing) but:

  • that can potentially lead to disagreements and disrespect within the team (“testing this is your problem, not mine”)
  • it can lead to bottlenecks/backlogs where lots of tickets are “partially complete”

This is a good reason to combine the roles of development, testing, and release-management. Or at least to have extended “handover periods” where the person handing the work over works together with the next person in the chain as a pair.

Partially Convinced

Things about Scrum in general, and the Field Guide in particular, that I find are at least partially justifiable and would apply in some conditions, or with some modification.

Daily Standups

Like a lot of people I’ve talked to, I’ve experienced a lot of poorly-structured “daily standups”. It could be claimed that this is not a flaw of Scrum, but instead a failing of the users of it. Nevertheless, any process which tempts people to use it wrong is a bad idea; Communism is a great concept in theory which fails in practice because it just runs against human nature. It seems to me that many people’s nature is not compatible with effective standups, with communication being either too brief or far too verbose…

Standups are a major interruption to the “flow” of coding; I seldom get any significant technical work done in the 15 minutes before a meeting, or the 15 minutes after one. When the meeting itself is 15 minutes that makes 45 minutes per day away from technical tasks (per team member)!

Quite often the content of these meetings is pretty boring too.

On the other hand, it is a great opportunity to inform the whole team of relevant info. Rituals are also simply part of human nature, and are very effective in binding groups together.

An alternative to standups is some kind of IRC-like system (a shared chat channel) where everybody posts their status at regular intervals (email is not really good for this, nor are wikis). IMO this approach has some real benefits, including ensuring that the daily meetings do not revert to the common “we are all reporting to the Project Manager/Scrum Master” pattern. However some people just aren’t good communicators; a physical meeting with verbal reporting is more effective with such team members. And online status reports just do not produce the same feeling of “we are in this together”.

Personally, for any “agile” team (whether Scrum or not) I would seriously consider proposing two physical standups per week (tue/thu), and “mandatory online status updates” on the other days. When combined with the other standard meetings (“backlog grooming”, retrospectives, etc.), I feel this suffices. It also works well when some team members are remote, ie the “physical standup” includes a video-call (always more time-intensive). It may also work better when some team-members are only part-time, ie are members of more than one team concurrently.

If this suggestion is implemented, but the two standups per week regularly run too long due to content that really does belong in a standup (the three questions [1]), then that is evidence that for this team and this project daily standups (5 per week) would be appropriate.

As a final note, if standups are implemented then I strongly support keeping them short (as Scrum makes clear). In particular:

  • do stand up, never sit down
  • address only the “three questions”; postpone all else until later
  • don’t have the meetings in a room far from the work areas
  • don’t use a computer as part of the standup (eg to display a JIRA board); that just leads to everyone watching while the projector is set up and items are dragged around on a screen. The point is to increase communication, not do paperwork.

Sprints

As mentioned above, there are many admin tasks that are commonly done on a calendar schedule:

  • packaging software for installation (if this is not automated and done continuously per-commit)
  • installing software into a customer-visible environment (if this is not automated and done continuously per-commit)
  • asking customers to review the latest software state and provide feedback
  • checking the requirements backlog and ensuring enough items are detailed enough for development
  • doing retrospectives
  • reporting progress to management/customer (duty of “Scrum Master” and “Product Owner”)

These processes do not necessarily have to be done at the same frequency, but it does get harder to track them if they are not. Scrum simply chooses a single “sprint length” and then ensures each of these is done once per sprint. That’s a little clumsy in some respects but also simple to remember. In general, therefore, having “sprints” in which each of these is done makes sense to me.

IMO, what does not make sense is “assigning tickets to a sprint”, ie linking the concept of “work done” to the scheduled tasks above. Or in other words, the concept of planning what tasks are going to be done for an upcoming time period. I lean towards the “Kanban continuous flow” approach, where team members just pick work off the “requirements backlog” without a “sprint backlog” (see later). Note that skipping this does not prevent calculating “team velocity”; see later.

Where software is being developed for a customer, and deployment into an environment in which the customer can evaluate changes is something that cannot be integrated into a “continuous delivery” system, then sprints make sense; they are the “release unit” and “feedback granularity unit”. Scrum was first published in 1995, when this situation was presumably very common.

On the other hand, if continuous delivery is being practiced and customer feedback is being continuously gathered, then it may be sensible to simply do away with sprints and do the other tasks “on demand” - eg reviewing/refining/restocking the backlog when it drops below a specific size, or when new ideas are raised, rather than at specific times.

I have noticed that the “start of sprint” and “end of sprint” tasks often seem to be heavily centered around “reporting to management”. If this is the case for a particular team, it might be worth looking at whether the ideas of agile development are truly being applied - ie whether the team is perhaps being “over-managed”.

One other possible purpose for a sprint is to package some useful “business value” which is larger than an individual task. However it seems easier to me to do this more directly, simply delivering software as soon as the set of tasks associated with it are done. This avoids delays in delivering the feature (waiting until the end of the sprint), and unnecessary pressure when the end-of-sprint is approaching and the set of tasks isn’t quite complete. Only when “software delivery” is hard to do does the sprint-oriented approach make sense as a solution - but even then it might be more useful to work on better delivery processes than to introduce inefficiencies into the development process.

Assigning Story Points

Estimating software implementation times is hard. Very hard.

Where possible, I prefer to keep estimates as private as possible. If a project is internal then a team might get away with using time-boxed development, ie saying “we’ll deliver as fast as we can until time or budget runs out”. Task prioritisation is then important, but precise time estimates are not. Software development for a long-term customer where a strong trust relationship exists might also be possible to run in this way.

However even when this approach is used, feature size estimates are needed to properly set the priority; the customer (internal or external) usually has an idea of the financial benefit of a specific feature, but they need an estimate of the implementation cost in order to decide whether it should be implemented, and if so where it sits in the order. Developers also need a feel for how many subtasks the feature should be broken into before it is appropriate to be taken by a developer for implementation; each developer-level task should only last a few days so that:

  • code reviews are sensible
  • partially-implemented features can be handed over from developer to developer (eg in case of illness)
  • customer review can be gathered and the planned feature adapted if necessary
  • everybody feels a sense of progress

The concept of “story points” is not good, but is the best solution I am aware of; estimating tasks in comparison to other tasks is probably the best we can do. It is certainly better than producing estimates in “person days”.

If the project’s customer demands a “completion date” for a fixed set of features, then there isn’t much that can be done other than fall back to experience and instinct, multiply the result by 4, and then get ready to argue about requirement-changes, code quality, and missed delivery dates as the project approaches whatever date was finally chosen.

Team Velocity

Skipping the assignment of tasks (backlog items) to sprints (as I suggest) doesn’t prevent the calculation of “team velocity”; there is still a set of tasks which have been marked as done during this “sprint time period” and these tasks had “story points” associated with them.

Given the velocity over a recent time-period (eg the last 4 weeks), a general feel can be obtained about when a specific point in the project backlog (requirements list) will be reached. Of course this estimate has some pretty wide error-margins, and these will be ignored by those that the information is reported to.
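
To make the arithmetic concrete, here is a minimal sketch with invented numbers; the point is mostly how simple, and therefore how rough, such a forecast really is.

    # Hypothetical sketch: forecast time-to-milestone from recent velocity.
    # All numbers are invented; real forecasts carry wide error margins.
    points_done_per_week = [21, 18, 25, 19]   # story points completed in each of the last 4 weeks
    velocity = sum(points_done_per_week) / len(points_done_per_week)

    points_to_milestone = 160                 # story points remaining down to the target backlog item

    weeks_remaining = points_to_milestone / velocity
    print(f"Roughly {weeks_remaining:.0f} weeks at current velocity (give or take a lot)")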

The question is: are errors in the estimates random or do they have a specific bias (eg usually too high or usually too low)? When no significant bias is present, the number of remaining features is relatively large, and the project is relatively long, then the “time to completion” might be halfway accurate. IMO, none of these conditions is likely to be true (bias is likely, the number of features small due to coarse-grained features, and projects should always be much less than one year) so such time estimates are unlikely to be helpful. On the other hand, the author of the Field Guide claims they are useful, and he certainly does have in-field experience. It is perhaps plausible that over time the mapping from “what this team calls a story point” and “real-world person days of work” might stabilise in some rough sense. I might have to mark myself here as “skeptical but willing to try it”.

Interestingly, the concepts of “stories”, “story points” and “team velocity” are not defined at all in Scrum itself; all the original guide mentions is that “various practices exist to forecast progress”. I have more to say on the topic of “forecasting and burn-down” later, in the “do not agree” section.

A Dedicated Scrum Master

In most of the projects I’ve been associated with in the past, for every 5-10 developers there has been a project manager who:

  • does administration tasks for the team
  • organises contact with the customer
  • negotiates delivery schedules, requirement changes, etc (at the non-technical level)

In other words, this person has been a little “product owner” and a lot “Scrum Master”.

I do like the idea of a strict separation of these roles.

The project managers I have worked with fall into one of two categories:

  • hierarchical: “I’m here to tell you what to work on”, or
  • supportive: “I’m here to help you work faster; tell me what you need”

The second was always my favourite; Scrum makes it clear that the Scrum Master should act in the second manner. This is great, but not new.

The Field Guide does strongly recommend having the Scrum Master do nothing else but be Scrum Master. I’m not sure how practical that is in real life; is there really enough work in the role to fill 40 hours per week? The Field Guide does suggest that a person might be Scrum Master in two or three teams concurrently, which I agree is a reasonable solution if the company is large enough. Being a project-manager in my experience requires a distinct skill-set and personality-type, and I can see that a dedicated Scrum Master career (or at least that role for the length of a project) is a good idea if it is administratively possible.

Do Not Agree

Things about Scrum in general, and the Field Guide in particular, that I just don’t like and would try to avoid when possible.

Assigning Tasks to Sprints

IMO, what does not make sense is “assigning tasks to a sprint” in advance, ie planning what tasks are going to be done. This just leads to unnecessary stress and arguments; as long as all team members are doing an honest day’s work then it seems simpler to just use the approach of “take items from the project backlog”.

Interestingly, older versions of the official Scrum guide mentioned “sprint commitment”, but this concept was removed in 2011. Instead, Scrum now talks about “forecasting” the set of tasks that will be completed in a sprint - better, but I’m still unclear about why even this is helpful. And sadly, task-tracking tools (particularly JIRA) don’t seem to support the concept of “forecasting” well, instead still suggesting “assignment”.

As noted earlier, the Field Guide strongly recommends against changing Scrum conventions. I’m certainly open to reasons why assigning tasks to sprints is a good idea, but I can only see one: being able to report at the start of each sprint what will be delivered at the end of it. Except that this is misleading, because software development is unpredictable. The damage this convention causes to team morale, on the other hand, is clear and obvious.

Forecasting and Burn-down Charts

The topic of burndown charts isn’t addressed in either the official Scrum guide or the Scrum Field Guide, but I have seen them used.

If (as suggested above) tasks are not assigned to a sprint, ie a sprint does not consist of “a fixed set of tasks in a fixed time”, but just “as much as possible in a fixed time”, then “per-sprint burndown charts” are irrelevant. And IMO they should die. In a fire. Attempting to put pressure on team members because an arbitrary graph isn’t going to reach zero at an arbitrarily-chosen date is pointless and demoralizing.

Per-project burndown charts might have some slight relevance, showing how long (at current performance, and in the knowledge that feature estimates are only very rough) it will take to reach any arbitrary point in the requirements-priority-list. However it is important to note that only the top items in a backlog will have been fully “refined”, with items further down in the priority list becoming progressively fuzzier and more poorly estimated. Team velocity is also very approximate. And in any agile project, the set of items in the backlog is expected to change. Therefore the utility of such charts is low. If a fixed delivery date for a fixed set of features is required, then it should be accepted that the waterfall method must be used - with the corresponding up-front design phase and appropriate formal requirement-change processes in place.

And of course any forecast will be misinterpreted by management (internal and customer), but there is no known remedy for that.

Strictly Following a Predefined Scrum Process

Agile development is about “people over processes”, self-empowerment, and feedback loops for processes as well as software. Forcing a process onto a team seems contradictory to the Agile principles.

Interestingly, the original Scrum guide simply states that if you adapt the documented process then you shouldn’t call it “Scrum”, which is fair enough. However some sources warn against deviating from “the approved path” at all - something that I would recommend doing wherever it suits.

Footnotes

  1. What did you do yesterday? What will you do today? Are there any impediments in your way?