Using the revised Bloom’s Taxonomy
for the creation of examination questions

 

This document is written for individuals who are involved in writing examination questions. The techniques suggested here can be applied to the creation of questions at any level (from access to post-graduate) and any subject – although the examples relate to Computing and Information Systems. The document has two objectives:

 

*      update setters on the revision to Bloom’s taxonomy

*      suggest ways of applying the revised taxonomy to the production of exam questions.

 

It is primarily intended for individuals who set questions (‘setters’) for the Scottish Qualifications Authority, but may also be of interest to individuals who simply require a brief introduction to the revised taxonomy for personal purposes.

 

The document borrows from the textbook on which it is based – A Taxonomy for Learning, Teaching, and Assessing (ISBN 0-8013-1903-X) – edited by Lorin W Anderson and David R Krathwohl. It is available from most book stores and online. Readers of this document are strongly recommended to obtain the book for a much fuller treatment of each topic. The book focuses on teaching and learning rather than assessment, but this document concentrates on assessment – and the use of the revised taxonomy to produce exam questions.

 

The original textbook – Taxonomy of Educational Objectives, Book 1: Cognitive Domain by Benjamin Bloom – is still in print (in the US only) and is also available online. Readers are also recommended to read this title since much of its advice is still applicable to the construction of contemporary examination questions.

 

Bobby Elliott

Scottish Qualifications Authority

Version 1.1, February 2002

 

Table of contents

1. Background

*      The original taxonomy

2. The revised taxonomy

*      The taxonomy table

       *      The knowledge dimension

       *      The cognitive dimension

3. Constructing question papers

*      Creating questions

*      Classifying questions

*      Constructing question papers

*      Analysing question papers

*      Balancing question papers

4. Summary

 

1. Background

The original Bloom’s Taxonomy was published in 1956. It was designed to provide a common framework for individuals who were responsible for constructing assessments – particularly written examinations within universities. The original framework provided a means of classifying academic ability into one of six categories:


 

 

  1. Knowledge
  2. Comprehension
  3. Application
  4. Analysis
  5. Synthesis
  6. Evaluation.

Figure 1 - Bloom's original Taxonomy

 

Bloom further sub-divided these categories. For example, knowledge was broken down into: knowledge of specifics, knowledge of ways and means of dealing with specifics, and knowledge of universals and abstractions. Some of the sub-categories were themselves sub-divided. For example, knowledge of specifics was broken down into: knowledge of terminology, and knowledge of specific facts. In total, Bloom proposed 21 categories of cognitive ability, ranging from ‘knowledge of terminology’ to ‘judgements in terms of external criteria’.

 

The taxonomy is hierarchical in that each category builds on the one underneath. So, for example, analysis depends on comprehension which in turn depends on underpinning knowledge (see figure above). In other words, you can’t analyse something until you comprehend it, and you can’t comprehend something until you know about it.

 

Bloom’s Taxonomy has been the basis of test paper construction for almost 50 years. A full treatment of the Taxonomy can be found in the original text.

 


 

2. The revised taxonomy

Bloom’s Taxonomy remained unchanged from 1956 until 2001, when a revision entitled A Taxonomy for Learning, Teaching, and Assessing, edited by Lorin Anderson and David Krathwohl, was published. The revised taxonomy focuses on teaching and learning, rather than assessment, but can be applied to the construction of test items. The core of the revised taxonomy is a taxonomy table.

 


Taxonomy table

The taxonomy table is a two-dimensional system for classifying knowledge and cognitive skills (see Table 1). One dimension classifies knowledge and the other classifies cognition.

 

Knowledge dimension        Cognitive dimension

                           1. Remember   2. Understand   3. Apply   4. Analyse   5. Evaluate   6. Create
A. Factual knowledge
B. Conceptual knowledge
C. Procedural knowledge
D. Meta knowledge

(The cells of the table are initially empty; learning objectives or test items are placed in the appropriate cells.)

Table 1 - Taxonomy table

Taken together, the two dimensions (or classifications) allow you to categorise learning – or assessment. For example, a specific question might assess the recall of factual knowledge – this test item would be placed in cell 1A; another question might assess the evaluation of procedural knowledge – which would be placed in cell 5C.

 

The table has a sense of complexity. Complexity can be measured by how far the test item is placed from the top left-hand corner. In the above example, the recall of factual knowledge (cell 1A) is less complex than the evaluation of procedural knowledge (cell 5C). Of course, this is a crude generalisation and assumes that both questions relate to the same knowledge domain – the recall of factual knowledge relating to nuclear fusion is probably more complex than the application of procedural knowledge of multiplication tables.
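To make the ‘distance from the top left-hand corner’ idea concrete, here is a minimal sketch in Python (an illustration only – the numeric scoring scheme is an assumption of this sketch, not part of the revised taxonomy):

```python
# Illustrative sketch: scoring taxonomy-table cells by "distance" from
# the top left-hand corner (cell 1A). The numeric scoring scheme is an
# assumption for illustration, not part of the revised taxonomy.

COGNITIVE = ["Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create"]
KNOWLEDGE = ["Factual", "Conceptual", "Procedural", "Meta"]

def complexity(cell):
    """Crude complexity score for a cell label such as '5C': the sum of
    the zero-based cognitive and knowledge indices."""
    cognitive_index = int(cell[0]) - 1          # '1'..'6' -> 0..5
    knowledge_index = ord(cell[1]) - ord("A")   # 'A'..'D' -> 0..3
    return cognitive_index + knowledge_index

# The examples from the text: recall of factual knowledge (1A) is less
# complex than evaluation of procedural knowledge (5C).
print(complexity("1A"))  # 0
print(complexity("5C"))  # 6
```

As the text cautions, such a score is a crude generalisation; it ignores the knowledge domain being assessed.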

 

The table also has a sense of hierarchy. Factual knowledge underpins conceptual knowledge which underpins procedural knowledge which underpins meta knowledge. This is illustrated in Figure 2.

 

Figure 2 - Knowledge dimension as a hierarchy

The above diagram provides an example of the hierarchy of knowledge in the context of computer programming. Knowledge of a programming language’s syntax and semantics (factual knowledge) is required before a student can understand program constructs (conceptual knowledge), which is needed before a student can write a complete program (procedural knowledge), and only then can a student appreciate his or her limitations as a programmer (meta knowledge).

 

Each of the dimensions in the table will now be explained.

 


Knowledge dimension

The knowledge dimension has four categories:

 

  1. factual knowledge
  2. conceptual knowledge
  3. procedural knowledge
  4. meta knowledge.

 

Each category can be further broken down. For example, factual knowledge has two sub-categories – knowledge of terminology and knowledge of specific details (see Table 2).

 

Factual knowledge: The basic elements candidates must know to be acquainted with a discipline.

*      Knowledge of terminology: technical vocabulary, knowledge of symbols, knowledge of measures, knowledge of acronyms and abbreviations.

*      Knowledge of specific details: history of the Internet, descriptions of the features of a specific WP program, sources of information, knowledge of a programming language.

Conceptual knowledge: The relationships between components or systems.

*      Knowledge of classifications: types of programming language, types of computer system.

*      Knowledge of systems: basic structure of a computer, the ISO reference model, knowledge of a specific operating system.

*      Knowledge of principles and generalisations: the stored program concept, programming techniques, Moore’s Law.

*      Knowledge of theories, models and structures: program testing strategies, SSADM, program design, JSP.

Procedural knowledge: How to do something, methods of research, criteria for using methods and techniques.

*      Knowledge of subject-specific skills and algorithms: how to use an application package, how to write a computer program, sorting and searching algorithms.

*      Knowledge of subject-specific techniques and methods: top-down program design, normalisation, structured programming, systematic fault-finding.

*      Knowledge of criteria for using procedures: when to use a specific algorithm, criteria for selecting a type of applications package.

Meta knowledge: Knowledge of knowledge.

*      Strategic knowledge: learning strategies, the use of heuristics, mind mapping.

*      Knowledge about cognitive tasks: the relative complexity of different procedures, exam technique.

*      Self knowledge: awareness of personal strengths and weaknesses, awareness of the extent of one’s own knowledge of a particular topic.

Table 2 - Knowledge dimension

Most types of knowledge can be classified under one of these categories (or sub-categories). We will look at how these classifications can be used to assist with the production of exam questions in the next section.

 


Cognitive dimension

The cognitive dimension has six categories:

 

  1. remembering
  2. understanding
  3. applying
  4. analysing
  5. evaluating
  6. creating.

 

Remember: Retrieve relevant knowledge from memory.

*      Recognising (identify, match): matching descriptions with visual representations. For example, identifying the components of a microcomputer system.

*      Recalling (state, define, describe): retrieving knowledge from long-term memory. For example, stating four characteristics of information or defining the meaning of an acronym.

Understand: Construct meaning from instructional messages.

*      Interpreting (estimate, convert, translate): changing from one form of representation to another. For example, interpreting an advert for computer hardware or converting one unit of measurement to another (e.g. bytes to megabytes).

*      Exemplifying (give examples, illustrate, demonstrate, show): finding a specific example of a concept or principle. For example, relating a specific package’s features to the generic features of a type of package.

*      Classifying (arrange, classify, categorise, sort): assigning something to a specific class or category, or re-ordering a list. For example, classifying specific software products by software type (freeware, shareware, commercial etc.).

*      Summarising (summarise, review): abstracting a general theme or major points. For example, writing a short review of a specific software product.

*      Inferring (predict, deduce, extrapolate): drawing a conclusion from presented information. For example, given a number of specific cases, producing rules using an expert system.

*      Comparing (compare, contrast, evaluate, map): detecting correspondences between ideas and/or objects. For example, contrasting two programming languages in terms of their data structure facilities.

*      Explaining (give reasons, explain, justify): constructing a cause-and-effect model of a system. For example, giving reasons for the emergence of the Internet.

Apply: Carry out or use a procedure in a given situation.

*      Executing (carry out, perform, complete): applying a procedure to a familiar task. For example, carrying out the procedure to install an applications package on a PC.

*      Implementing (use, apply, implement): applying a procedure to an unfamiliar task. For example, using applications software to solve a given problem or writing a piece of code to perform a specific task.

Analyse: Break material into its constituent parts and determine how these parts relate to one another and to the overall structure or purpose.

*      Differentiating (select, choose, discriminate): identifying similarities and differences, and important and unimportant attributes, of objects or systems. For example, choosing a computer system (from two or more provided) for a specific task, or selecting a specific data structure to model a given problem.

*      Organising (arrange, find, structure, organise): determining how elements fit together within a system. For example, constructing a flowchart to represent a given problem description or producing a data flow diagram to model a supplied case study.

*      Attributing (assign, attribute, deconstruct): determining a point of view, bias, values or intent. For example, determining the point of view of the author of an essay on the social implications of IT.

Evaluate: Make judgements based on criteria and standards.

*      Checking (check, verify, confirm, monitor, test): determining inconsistencies or fallacies within a process or product. For example, dry running a given algorithm to check its correctness or testing a program to locate errors.

*      Critiquing (evaluate, comment on, review, appraise, critique, judge, critically assess): determining the appropriateness of a given procedure for a given problem, or measuring a product or process against criteria. For example, judging the appropriateness of two algorithms for a given situation, or evaluating the data security arrangements for a specific scenario.

Create: Put elements together to form a coherent or functional whole; re-organise elements into a new pattern.

*      Generating (suggest, produce, hypothesise, imagine): producing alternative hypotheses based on criteria. For example, given a description of a hardware error, proposing possible causes.

*      Planning (plan, design, set up): devising a procedure for accomplishing a task. For example, designing a problem-solving routine to diagnose and correct hardware problems or planning the creation of a new software product.

*      Producing (produce, make, construct, create): inventing a product. For example, creating a new piece of software or constructing a Web site.

Table 3 - Cognitive dimension

This table provides setters with a range of cognitive skills from which questions can be constructed. The list of keywords, in particular, provides setters with a large number of active verbs that can be used to construct diverse questions.

 

The use of the taxonomy table to construct test items is explored next.

 


 

3. Constructing question papers

The taxonomy table can be used for a number of purposes including:

 

*      assisting with the creation of questions

*      providing a classification system for questions

*      assisting with the construction of question papers

*      analysing question papers

*      balancing question papers.

 

Each of these uses will now be described.

 


Creating questions

The taxonomy table helps in a number of ways in this regard. The knowledge dimension will help you consider the type of knowledge that you are trying to assess (factual, conceptual, procedural or meta). The cognitive dimension (see Table 3) will help you create different types of questions that relate to different cognitive skills.

 

The table, therefore, can be used to generate different types of questions – that is, questions that cover a spread of the knowledge/cognitive domain (rather than a series of questions that repeatedly assess the same thing). So, given a specific topic, and thinking about the different types of knowledge and cognitive skills, it should be possible to come up with a number of diverse questions on that topic. For example, the following questions relate to computer databases.

 

  1. Define a ‘computer database’. (factual knowledge)
  2. Explain three key characteristics of a computer database. (conceptual knowledge)
  3. Relate each of these characteristics to a database package with which you are familiar. (procedural knowledge)
  4. Compare the database facilities of a dedicated database package to those of a general purpose spreadsheet package. (procedural knowledge)
  5. Suggest criteria that could be used to help users decide whether to use a database or spreadsheet package for a specific task. (procedural knowledge)

 


Classifying questions

Once a bank of questions has been created, the taxonomy table provides a means of categorising the questions. For example, the sample questions above could be classified as follows:

 

  1. Define a ‘computer database’. (remembering factual knowledge)
  2. Explain three key characteristics of a computer database. (understanding conceptual knowledge)
  3. Relate each of these characteristics to a database package with which you are familiar. (applying procedural knowledge)
  4. Compare the database facilities of a dedicated database package to those of a general purpose spreadsheet package. (analysing procedural knowledge)
  5. Suggest criteria that could be used to help users decide whether to use a database or spreadsheet package for a specific task. (evaluating procedural knowledge)

 

The questions could be mapped onto the taxonomy table as illustrated in the table below.

 

Knowledge dimension        1. Remember   2. Understand   3. Apply     4. Analyse   5. Evaluate   6. Create
A. Factual knowledge       Question 1
B. Conceptual knowledge                  Question 2
C. Procedural knowledge                                  Question 3   Question 4   Question 5
D. Meta knowledge

Table 4 - Mapping questions

Mapping the questions onto the taxonomy table gives an indication of the relative complexity of the questions. The mapping also confirms that the questions are diverse since they occupy different cells in the table and therefore assess different cognitive abilities.
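The diversity check described above can be sketched in a few lines of Python. The cell labels follow the mapping in Table 4; the helper function itself is an illustrative assumption, not SQA procedure:

```python
# Illustrative sketch: checking that a set of classified questions is
# "diverse", i.e. that no two questions occupy the same cell of the
# taxonomy table. Cell labels follow Table 4 (cognitive 1-6, knowledge A-D).

# The five sample database questions and the cells they occupy.
mapping = {
    "Question 1": "1A",  # remembering factual knowledge
    "Question 2": "2B",  # understanding conceptual knowledge
    "Question 3": "3C",  # applying procedural knowledge
    "Question 4": "4C",  # analysing procedural knowledge
    "Question 5": "5C",  # evaluating procedural knowledge
}

def is_diverse(mapping):
    """True when every question occupies a different cell of the table."""
    cells = list(mapping.values())
    return len(cells) == len(set(cells))

print(is_diverse(mapping))  # True
```

If two questions landed in the same cell (say a sixth question also classified as 5C), the check would fail, signalling that the paper repeatedly assesses the same combination of knowledge and cognitive skill.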

 


Constructing question papers

Once a question bank has been produced and each question has been classified, the Principal Assessor (PA) can construct a question paper, using the taxonomy table to check the balance and complexity of the planned paper. For example, Table 4 tells you two things about the sample questions:

 

*      they are different (in that they cover different knowledge and cognitive processes)

*      they are difficult (they cover quite demanding cognitive processes).

 

Different questions will occupy different cells in the taxonomy table; similar questions will occupy the same cells in the table. Simple questions will occupy cells close to the top left-hand corner; complex questions will be further away from the top left-hand corner. So the PA can use the table to select a range of questions that assess different types of knowledge and different cognitive processes. In general, you would expect lower level papers to have more questions towards the top left-hand corner of the table and higher level papers to have questions towards the middle and bottom right-hand corner. But every paper – irrespective of its level – should map onto a range of cells (rather than repeatedly assessing the same type of knowledge or cognitive process). This provides the necessary discrimination to allow candidates to perform at varying levels and receive different grades.

 


Analysing question papers

Once a paper has been constructed, the taxonomy table can be used to analyse it. This could be done to check the balance of a paper – in other words, to check if different types of knowledge have been examined and various cognitive skills assessed. The following tables illustrate the mapping of two past papers. The first table illustrates the mapping of an Intermediate 1 Computing Studies paper and the second table maps a Higher Computing paper (both papers were operational papers used in diet 2000). The numbers in the cells represent the number of questions in each paper that were categorised according to the taxonomy table. For example, three questions in the Intermediate paper related to ‘remembering factual knowledge’ (see Table 5).

 

Knowledge dimension        1. Remember   2. Understand   3. Apply   4. Analyse   5. Evaluate   6. Create
A. Factual knowledge            3              2             1           1
B. Conceptual knowledge         1
C. Procedural knowledge
D. Meta knowledge

Table 5 - Mapping of Intermediate 1 paper

The Intermediate 1 paper, as expected, focussed on factual knowledge. In fact, seven of the eight questions in the paper related to this type of knowledge. A wider variety of cognitive skills was assessed – ranging from ‘remember’ to ‘analyse’.

 

Knowledge dimension        1. Remember   2. Understand   3. Apply   4. Analyse   5. Evaluate   6. Create
A. Factual knowledge            4              2                         1
B. Conceptual knowledge         1              3             2                        1
C. Procedural knowledge                                      5
D. Meta knowledge

Table 6 - Mapping of Higher paper

The Higher paper assessed higher order knowledge and skills. Seven questions related to factual knowledge but seven related to conceptual knowledge and five related to procedural knowledge. The focus on conceptual knowledge at this level is expected. The Higher paper also exhibits wider coverage of the knowledge/cognitive domain, with eight of the 24 cells occupied by at least one question (compared with five occupied cells in Table 5).

 

So the taxonomy table gives us an indication of coverage and complexity, and this information can be used by Principal Assessors to create question papers which provide sufficient discrimination to grade candidates.

 


Balancing question papers

Once a question paper has been analysed, the Principal Assessor can use the mapping to check the balance of the paper – that is, its coverage and complexity. The coverage is easily seen by the number of cells that are occupied once the paper has been mapped. A good question paper will generally have a reasonable coverage of the knowledge/cognitive domain. The complexity of a paper can be measured by the distance of each question from the top left-hand corner of the table – the further from the corner the more complex the question.
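The two measures – coverage (occupied cells) and complexity (distance from the top left-hand corner) – can be sketched as follows. The grids reproduce the counts from Tables 5 and 6; the numeric complexity score (row index plus column index, averaged over questions) is an assumption of this sketch, not an official SQA metric:

```python
# Illustrative sketch: measuring the coverage and complexity of a mapped
# question paper. Rows are the knowledge dimension (A-D), columns the
# cognitive dimension (1-6); the counts reproduce Tables 5 and 6.

intermediate_1 = [  # Table 5
    [3, 2, 1, 1, 0, 0],  # A. Factual
    [1, 0, 0, 0, 0, 0],  # B. Conceptual
    [0, 0, 0, 0, 0, 0],  # C. Procedural
    [0, 0, 0, 0, 0, 0],  # D. Meta
]
higher = [  # Table 6
    [4, 2, 0, 1, 0, 0],  # A. Factual
    [1, 3, 2, 0, 1, 0],  # B. Conceptual
    [0, 0, 5, 0, 0, 0],  # C. Procedural
    [0, 0, 0, 0, 0, 0],  # D. Meta
]

def coverage(grid):
    """Number of occupied cells (out of 24)."""
    return sum(1 for row in grid for n in row if n > 0)

def mean_complexity(grid):
    """Average distance of questions from the top left-hand corner,
    scored as row index plus column index (an illustrative measure)."""
    total = questions = 0
    for r, row in enumerate(grid):
        for c, n in enumerate(row):
            total += n * (r + c)
            questions += n
    return total / questions

print(coverage(intermediate_1), coverage(higher))  # 5 8
print(mean_complexity(intermediate_1))             # 1.0
print(mean_complexity(higher))                     # approx. 2.26
```

On these figures the Higher paper scores higher on both measures, matching the qualitative analysis above.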

 

Applying these criteria to the sample papers, it can be seen that the Intermediate 1 paper (Table 5) had low coverage and low complexity; the Higher paper (Table 6) had medium coverage and medium complexity. This is much as you would expect. However, a visual check of each mapping indicates potential improvements to each paper. The Intermediate paper could do with a question on ‘understanding conceptual knowledge’ (cell 2B) – perhaps instead of the question in cell 4A. This would not only remove the assessment of a difficult cognitive skill (analysis) but also increase the proportion of the paper dealing with conceptual knowledge (which is probably under-assessed – even at this level). The distribution of questions in the Higher paper could be improved. There are gaps (no question assesses the application of factual knowledge) and overlap (five questions assess the application of procedural knowledge). A rewording of some of the questions would correct this imbalance. While the coverage of this paper is reasonable, the ‘perfect’ paper would be more widely distributed across the knowledge/cognitive domain.

 

The key point is that the taxonomy table gives the Principal Assessor a way of analysing a question paper and correcting its balance. But it’s not a science. Placing particular questions in a specific cell in the table is a bit arbitrary – many questions could be placed in one of a number of possible cells. However, by mapping an entire question paper onto the taxonomy table you get a feel for the general coverage and level of complexity of the paper.

 


 

4. Summary

The taxonomy table doesn’t write questions for you, nor does it guarantee good question papers. It doesn’t even guarantee a balanced paper (although it makes this more likely). What it does is provide a common framework for thinking about questions and question papers, and a common vocabulary for discussing their contents. The use of the table should help setters produce a wide range of diverse, discriminating questions that combine into question papers which are appropriate to their level and consistently demanding from year to year.

 
