COMP61511 (Fall 2017)

Software Engineering Concepts
In Practice

Week 2

Bijan Parsia & Christos Kotselidis

<bijan.parsia, christos.kotselidis@manchester.ac.uk>
(bug reports welcome!)

Reflecting on Personal Qualities

[Image: a mirror]

3 of 5

  • We're at Week 3!
    • 3/5s done after today!
    • Where are you?
      • And where are you going?
      • What are the next steps?
  • This is a good time to reflect

Reflection

Reflection is the process of examining one's own thoughts, beliefs, experiences, concepts, etc., in order to gain self-knowledge and insight

  • Reflection doesn't need to be judgemental
    • You aren't looking for flaws
    • You are trying to understand yourself
    • This includes good things!
      • Maybe they can get better

Reflection Example

  • For CW1 some people handed in
    • a rar archive (e.g., mbassbp2_cw1.rar)
    • an archive with a name like Bijan_Parsia_cw1.zip instead of mbassbp2_cw1.zip
  • This is in spite of
    • my mentioning it in lecture
    • it being described in the assignment
    • there being a preparation script
  • Some people didn't "twig" until I called it out again in class
    • What should be the takeaway?

Metacognition

Metacognition is thinking about thinking

  • Reflection is one example
  • In general, an important skill
    • For example, when you are stuck on a problem
      • it helps to check whether you are in a rut
      • that is, just trying the same thing over and over
    • Being aware that you got stuck can help you get unstuck!

Cognitive Biases

A cognitive bias is a systematic departure from rationality.

  • We all have them, and lots of them:

Self-Efficacy

Perceived self-efficacy is defined as people's beliefs about their capabilities to produce designated levels of performance that exercise influence over events that affect their lives.

  • Too much self-efficacy
    • is overconfidence
    • is related to Dunning-Kruger
  • Too little self-efficacy
    • paralyses you
    • leads to underachievement

Two Key Biases

  • Bias-Blind Spot
    • "The tendency to see oneself as less biased than other people, or to be able to identify more cognitive biases in others than in oneself."
  • Dunning-Kruger Effect
    • "The tendency for unskilled individuals to overestimate their own ability and the tendency for experts to underestimate their own ability."

Be very careful here!

Goldilocks Self-Efficacy

  • Both too much and too little are bad!
    • Too much == bored
    • Too little == daunted and uninterested

Aim for the sweet spot!

Trajectory!

  • Trajectory over current level
    • Current level is static
    • It informs trajectory
      • But doesn't determine it
  • Reflection!
    • Are you learning quickly or slowly?
    • Are you learning how to learn?

A Goal

A student who has mastered the Core Body of Knowledge (CBOK) will be able to develop a modest-sized software system of a few thousand lines of code from scratch, be able to modify a pre-existing large-scale software system exceeding 1,000,000 lines of code, and be able to integrate third-party components that are themselves thousands of lines of code. Development and modification include analysis, design, and verification, and should yield high-quality artefacts, including the final software product.

IEEE & ACM 2009 Software Engineering Curriculum Recommendations.

A student will...

  • be able to develop
    • a modest-sized software system
      • of a few thousand lines of code from scratch,
    • be able to modify a...large-scale software system
      • exceeding 1,000,000 lines of code,
      • and be able to integrate (1000s LOC) third-party components
    • Development and modification include
      • analysis, design, and verification, and
      • should yield high-quality artefacts,
      • including the final software product.

wc?

  • Where does wc.py get us?
    • For a proper clone
      • ≈ hundreds of LOC
    • With extensions
      • maybe 1000s
    • Not counting infrastructure
      • Tests, etc.
  • Does 100s predict 1000s?
    • Good question!

Look around!

  • Modest size software systems?
    • What do they look like?
    • What do they do?
    • Collect some examples!
  • Remember reverse engineering
    • Port from a different language!
    • Rewrite from scratch
  • Create something new!

Test Coverage(s)

[Image: a blanket]

Coverage

  • Esp. for fine-grained tests, generality is a problem
  • We want a set of tests that
    • determines some property
    • at a reasonable level of confidence
  • This typically requires coverage

Coverage and Requirements

  • Consider acceptance testing
    • For a test suite to support acceptance
      • It needs to provide information about all the critical requirements
  • Consider test driven development
    • Where tests drive design
    • What happens without requirements coverage?
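
One lightweight way to keep an eye on requirements coverage is sketched below. The requirement IDs and test names are made up for illustration (nothing here comes from the coursework): record which tests exercise each requirement and flag the ones with none.

    # Hypothetical map from requirements to the tests that exercise them.
    REQUIREMENT_TESTS = {
        "R1: count lines in a file": ["test_count_lines"],
        "R2: count words in a file": ["test_count_words"],
        "R3: read from standard input": [],   # no tests yet: a coverage gap
    }

    # Any requirement with an empty test list is a gap in requirements coverage.
    uncovered = [req for req, tests in REQUIREMENT_TESTS.items() if not tests]
    print("Requirements with no tests:", uncovered)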

Code Coverage

  • A test case (or suite) covers a line of code
    • if the running of the test executes the LOC
  • Code coverage is a minimal sort of completeness
    • See McConnell on "basis" testing
      • Aim for minimal test suite with full code coverage
    • See coverage.py
    • Tricky bit typically involves branches
      • The more branches, the harder to achieve code coverage
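
As a rough sketch of basis testing (the function and tests below are hypothetical, not from the coursework): a routine with one branch needs at least two test cases before every line has been executed.

    def blank_line_count(lines):
        # One branch: the early return for empty input.
        if not lines:
            return 0
        return sum(1 for line in lines if not line.strip())

    # A minimal ("basis") suite: the smallest set of tests covering every line.
    def test_empty_input():
        assert blank_line_count([]) == 0               # exercises the early return

    def test_mixed_lines():
        assert blank_line_count(["a", "", "  "]) == 2  # exercises the main path

With coverage.py installed, something like coverage run -m pytest followed by coverage report shows which lines (and, with the branch option, which branches) the suite missed.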

Input Coverage

  • Input spaces are (typically) too large to cover directly
    • So we need a sample
    • A purely random sample is probably inadequate
      • Space too large and uninteresting
    • We want a biased sample
      • E.g., where the bugs are
        • Hence, attention to boundary cases
      • E.g., common inputs
        • That is, what's likely to be seen
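
A sketch of a biased sample for a wc-style word counter (the helper and tests are illustrative, not from anyone's submission): a few boundary cases where bugs tend to hide, plus a common input.

    def count_words(text):
        return len(text.split())

    # Boundary cases: where the bugs usually are.
    def test_empty_string():
        assert count_words("") == 0

    def test_whitespace_only():
        assert count_words(" \t\n") == 0

    def test_no_trailing_newline():
        assert count_words("last word") == 2

    # Common case: what the program is likely to see in practice.
    def test_ordinary_sentence():
        assert count_words("the quick brown fox jumps\n") == 5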

Situation/Scenario Coverage

  • Inputs aren't everything
    • Machine configuration
    • History of use
    • Interaction patterns
  • Field testing helps
    • Hence alpha plus narrow and wide beta testing
  • System tests address this!

Limits of (Developer) Testing

Developing Test Strategies

  • Have one! However preliminary
    • Ad hoc testing rarely works out well
  • Review it regularly
    • You may need adjustments based on
      • Individual or team psychology
      • Situation
  • The McConnell basic strategy (22.2) is a good default

Developer Test Strategies

McConnell: 22.2 Recommended Approach to Developer Testing

  • "Test for each relevant requirement to make sure that the requirements have been implemented."
  • "Test for each relevant design concern to make sure that the design has been implemented... as early as possible"
  • "Use "basis testing" ...At a minimum, you should test every line of code."
  • "Use a checklist of the kinds of errors you've made on the project to date or have made on previous projects."
  • Design the test cases along with the product.

Some Internal Qualities

Software Quality Landscape

20.1. Characteristics of Software Quality, Code Complete

Thus far we looked at...

  • External
    • Functional
      • Correctness (the functional quality)
    • Non-functional
      • Efficiency (the non-functional quality)
  • Now, some internal
    • Testability
    • For Modification
      • Maintainability

Internal: For Modification

  • Maintainability
    • how easily you can "change or add capabilities, improve performance, or correct defects"
  • Flexibility
    • how easily you can modify it for new situations ("internal" version of adaptability)
  • Portability
    • how easily you can modify it for new environments
  • Reusability
    • how easily you can extract parts for use in other systems

Internal: For Comprehension

  • Readability
    • ease of comprehending the source code, esp at the statement level
  • Understandability
    • ease of comprehending the software system as a whole
      • from the synoptic ("bird's eye") view
      • to the myopic ("worm's eye") view

Readability is part of understandability. But you can have readable methods or functions and an impossible-to-grasp architecture.

Internal: Testability

  • A critical property!
    • Relative to a target quality
      • A system could be
        • highly testable for correctness
        • poorly testable for efficiency
    • Partly determined by test infrastructure
      • Having great hooks for tests is pointless without tests
  • Practically speaking
    • Low testability blocks you from knowing the other qualities
    • Test-based evidence is essential

Problem indicators

  • Code Smell
    • "a surface indication that usually corresponds to a deeper problem" (Kent Beck via Martin Fowler)
    • Quick to spot (if you have experience)
    • Doesn't always correspond to a problem
    • Somewhat subjective
    • The "WTF test"
  • Pain Points
    • A part of the system that recurrently causes problems
      • Hard to use
      • Revisited often

Testability Smell

    def get_file_list():
        # Get list of arguments from the command line, minus "wc.py"
        args_list = sys.argv[1:]
        ...

or

    def get_max_width():
        max_val_list = []

        for rec in file_log:
            max_val_list.append(rec.get_max_value())

        return max(max_val_list)

Thanks to the brave student who volunteered their code!

Testability Smell FIXED

    def get_file_list(args):
        # Get the list of arguments passed in, minus "wc.py"
        args_list = args[1:]
        ...

so we can test by:

    >>> import wc
    >>> wc.get_file_list(['wc.py', '-l', 'filename.txt'])

Thanks to the brave student who volunteered their code!
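
The second function from the smell slide can be treated the same way. A minimal sketch, assuming file_log is a list of record objects with a get_max_value() method:

    def get_max_width(file_log):
        # Take the log as a parameter instead of reading a global,
        # so a test can pass in a hand-built list of fake records.
        return max(rec.get_max_value() for rec in file_log)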

Testability Smell 2

What about:

    # all the module's code
    wc()

Thanks to the brave student who volunteered their code!

Testability Smell 2 FIXED

We want to import the module without running anything!

    # all the module's code
    if __name__ == "__main__":
        wc()

Now, import wc doesn't run wc()

Thanks to the brave student who volunteered their code!

Refactoring

  • Notice
    • None of these moves changed functionality
      • Or pretty much any external quality
    • But we improved
      • testability
      • maybe readability and maintainability
      • reusability!
  • We refactored the code

Refactoring

[Image: a complicated breadboard]

What is refactoring?

Refactoring is a transformation of code into sufficiently functionally equivalent code that has "better" internal properties.

"Martin Fowler defines as "a change made to the internal structure of the software to make it easier to understand and cheaper to modify without changing its observable behavior" (Fowler 1999)" — McConnell, 24.2

  • "Sufficiently functionally equivalent"
    • User observable/desirable behaviour is preserved
    • Up to some point

Examples

  • For example, a monolithic script
    • has low testability (only system tests!)
    • replace it with a set of functions
      • e.g., for arg handling, counting, and printing results
    • result: easy to test script
  • For example, hard coded values
    • great for getting going (tech debt!)
    • refactor by making them configurable
      • easier to tweak or eventually make a parameter
    • result: more flexibility!
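
A minimal sketch of both moves, using a hypothetical wc-like script (all names are illustrative):

    import sys

    # Hard-coded value pulled out into a named, configurable constant.
    COLUMN_WIDTH = 8

    def parse_args(argv):
        # Argument handling split out so it can be tested without a shell.
        return argv[1:]

    def count_words(text):
        return len(text.split())

    def format_line(count, name, width=COLUMN_WIDTH):
        return f"{count:>{width}} {name}"

    def main(argv=None):
        # Only this thin wrapper touches sys.argv and the filesystem;
        # everything above is a plain function that unit tests can call directly.
        argv = sys.argv if argv is None else argv
        for name in parse_args(argv):
            with open(name) as f:
                print(format_line(count_words(f.read()), name))

    if __name__ == "__main__":
        main()

Each piece (argument handling, counting, formatting) is now testable on its own, and COLUMN_WIDTH can later be turned into a command-line option without touching the counting code.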

Code Smells

  • Problem signs (select sample, McConnell 24.2)
    • Code is duplicated.
    • A routine is too long.
    • A loop is too long or too deeply nested.
    • Inheritance hierarchies have to be modified in parallel.
    • A class doesn't do very much.
    • A routine has a poor name.
    • Comments are used to explain difficult code.
    • A program contains code that seems like it might be needed someday.

Known Debt

  • Code smells indicate (potentially) unknown debt
  • But there's explicit, known debt
    • Hacks done for time pressure
    • Incomplete transitions from earlier designs
    • Learning code
    • Technology workarounds
    • Code for discarded features
    • Overengineered code

What Refactoring is Not

  • Code creation
    • Refactoring might enable or facilitate new functionality
    • But you shouldn't add while refactoring
  • Bug fixing!
    • Again, may facilitate
    • Refactoring may reveal or "fix" bugs
  • Performance tuning
    • See above
    • Clean code may be faster...or not!
  • Design changes or rearchitecting
    • Precursor activity!

Refactoring Preconditions

  • Tests, tests, tests
    • Even when applying "automatic" refactorings
    • Remember, no change in behavior
      • Up to a point at least!
  • For simple refactorings
    • use a tool!
      • e.g., renaming a routine
  • For complex refactorings
    • have a plan!
      • and test!
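
A minimal sketch of the "tests, tests, tests" precondition, assuming a hypothetical count_words function in your wc module: pin down the current behaviour with a test before refactoring, and re-run it after every step.

    from wc import count_words   # hypothetical helper; adjust to your own module

    def test_behaviour_preserved_by_refactoring():
        # Written before refactoring and re-run after each step:
        # if it ever fails, the "refactoring" changed observable behaviour.
        assert count_words("one two\nthree\n") == 3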

Technical Debt Revisited

Technical Debt (recall)

Technical debt is "the obligations incurred by a software organization when it chooses an expedient design or construction approach that increases complexity and is more costly in the long term."

  • Typically, lower (internal) quality level
  • It may buy an external quality effect
    • More functionality (correctness)
    • More efficiency
  • It may have negative external effects
  • It may just buy project effects
    • E.g., developer effort

Debt Taxonomy

Intentional Debt

  • Earlier, we discussed unintentional debt
    • We might not know we incurred it!
    • We might not know the interest!
    • Results of poor practice
  • Intentional debt == deliberate, knowingly incurred
    • Needs an identifiable rationale
      • With a scope

If you don't know the scope, it's probably not (fully) intentional

Why go into debt?

  • 2.A Short-Term Debt
    • Tactical reasons
    • 2.A.1 "Big" Debt
      • Significant shortcuts
    • 2.A.2 "Little" (individual) Debt
      • Tiny shortcuts
  • 2.B Long-Term Debt
    • Strategic reasons

Paying Down Debt

  • Debt can become unmanageable
    • Even manageable debt can be costly
  • Paying down debt costs
    • Debt shifts costs to the future
      • (But might add some costs now)
  • Refactoring is the usual approach
    • But also things like adding tests

Do you always have to pay down your debt?

Good debt vs. Bad debt

  • Good debt
    • Has a clear benefit
    • Is worth the cost
    • Is manageable
  • Bad debt
    • Has a skewed cost/benefit ratio
    • Is less manageable, or unmanageable
  • Debt can "spoil"
    • Too much good debt can become bad

Technical Debt Case Study

Slides

Project Effects on Product Qualities

[Image: Rodin's The Thinker]

A Key Point (1)

Although it might seem that the best way to develop a high-quality product would be to focus on the product itself, in software quality assurance you also need to focus on the software-development process.
McConnell, 20.2

Poor-quality processes raise the risk of poor-quality products

A Key Point (2)

The General Principle of Software Quality is that improving quality reduces development costs. McConnell, 20.5

Counterintuitive principle!

A Key Point Summarised

  1. Poor processes raise the risk of poor products
  2. Improving quality reduces development costs

But...pick two:
[Image: the Good-Fast-Cheap project triangle]

Triangle Encore

Question time!!

  • Does the Good-Fast-Cheap/Pick-2 triangle + the general principle imply that
    1. quality software must take a long time
    2. quality software is impossible
    3. the triangle is false
    4. the general principle is false

Cost of Detection

McConnell, 3.1

Project Qualities per se

  • We've only talked about product
    • Projects have qualities too!
    • E.g.,
      • Being on (or off) budget and schedule
      • Being well run
      • Being well "resourced"
      • Being popular
      • Using a certain methodology (correctly (or not))
  • Since project qualities influence product qualities
    • We have to study them as well!
    • There is an interaction