
Needs & Task Analysis

The analysis phase of the ADDIE process is used to determine whether there is a need for creating learning products, or whether a non-training solution such as a job aid is more appropriate. During the analysis phase the instructional problem is identified; the target audience characteristics are determined; desired outcomes are established; existing knowledge of the students is reviewed; content, context and media are considered; and an evaluation plan is developed.

Why analysis?

"The nature of the CAPDL contract vehicle, firm fixed price, puts a premium on knowing what you want to produce when the content is nominated. For this reason, front-end analysis and due diligence are of the utmost importance."
Source: TRADOC Pamphlet 350-70-12, The Army Distributed Learning (DL) Guide

I. Needs analysis

Purpose

Too often the needs analysis is not done, and as a result the need for instruction is not clearly understood.

Many training developers confuse task analysis and needs analysis and treat them as the same thing. In fact, they differ in two basic ways: purpose (or function) and sequence. Needs analysis (also known as front-end analysis) is triggered by a change in the purpose or function of an activity, and it determines whether new or additional training is needed. According to TRADOC Regulation 350-70, paragraph 4-6.3.f., changes to learning products are generated by a needs analysis; by changes in threat and doctrine (current and emerging), organizations and missions, materiel, leader development, or occupational specialty structure; by the need to eliminate leader competency and/or performance deficiencies; by learning policy or methodology; and/or by efforts to improve learning efficiency and effectiveness. The product of a needs analysis is a statement of learning goals or objectives expressed as tasks. The tasks associated with each learning objective can be either individual or group (unit or mission). In short, a needs analysis identifies the present state, the desired outcomes, and the gap between them.
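The present-state / desired-outcomes / gap framing can be sketched as a simple set difference over task inventories. The task names below are hypothetical placeholders, not tasks from any actual analysis:

```python
# Illustrative sketch: a needs-analysis gap as the set of desired tasks
# not covered by current training. Task names are hypothetical examples.
present_state = {
    "store food at safe temperatures",
    "wash hands before handling food",
}
desired_outcomes = {
    "store food at safe temperatures",
    "wash hands before handling food",
    "identify foodborne pathogens",
}

# The gap is whatever the desired state requires that the present state lacks;
# these tasks become candidates for new or additional instruction.
training_gap = desired_outcomes - present_state
print(sorted(training_gap))
```

In practice the inventories come from the needs analysis itself, and the gap feeds the task inventory reviewed by a Critical Task Selection Board.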

The list of tasks produced in the needs analysis (this is the inventory of tasks used in a Critical Task Selection Board) is then analyzed to determine what must be learned to achieve stated goals. This statement of learning goals is used to determine what actually gets taught or trained. The task analysis describes the learning situation for the purpose of making instructional design decisions. The task analysis organizes the tasks and task components and provides a sequence for training.

"Needs analysis first determines that an instructional need exists; task analysis analyzes that need for the purpose of developing the instruction and assessment. In cases where a needs analysis is not conducted, when training goals are mandated or already established, then the analysis process usually begins with task analysis."
(Jonassen, Tessmer, & Hannum, 1999, p. 8)

Romiszowski (1981)1 described task analysis procedures as operating at different levels, depending on the level of instructional design: learning objectives are defined at the course level and refined and sequenced at the lesson level; expected behaviors are classified at the learning event level; and, finally, the individual steps in the tasks are identified at the learning step level.

II. Task Analysis

Purpose

Too often task analysis is not performed, and the design of the instruction then risks missing the true learning outcomes.

There are many ways to conduct a task analysis and many reasons for conducting one. The basic reason is to determine what should be taught. For the instructional designer, a task analysis can provide many other answers: the goals and objectives of learning; the components of a job and the skills or knowledge required to perform it; how the learner should think before, during, and after learning occurs; the knowledge states involved (declarative, structural, and procedural); sequence and pace; the instructional strategies that foster learning; and the media and assessments that support learning. Gagné (1962)2 described task analysis as the most valuable tool the instructional designer has for effective learning, and it remains so today.

There are many methods for performing task analysis; it is not a one-size-fits-all type of review. The field of instructional design has its roots in the systems approach to training, which the Army followed beginning in the 1970s and lasting until adoption of the ADDIE process. The work of Gagné (1962) influenced training development that followed a model of job analysis, breaking the tasks of a position down into individual steps and their sequence. Other analysis designs follow human-computer interaction, which uses task analysis extensively to detail the needs of users as they interact with the computer, and cognitive task analysis, which is used to detail problem-solving activities such as critical thinking and fault diagnosis.

  A. Content Analysis

Content analysis is a term used to describe the actual content of the instruction, as opposed to the sequence, structure, or other mechanics of presentation. It is best left to the content subject matter experts (SMEs), who can review the results of the needs analysis and determine what needs to be taught, to what standard, and under what conditions. The SME can identify a training gap but may not be the right person, or have the right skills, to determine the instructional strategy to fill that gap.

Once that determination is complete and a training and education gap is defined, an instructional designer reviews the tasks that make up the present state and the desired outcomes (future state). Part of this process includes classifying the knowledge, skills, and abilities involved, to begin the process of designing the instruction.

  B. Task Description

Task description is part of a systems approach to task analysis in which training is defined using systems analysis as the foundation. The task description specified the interface between people and systems in precise terms and stated exactly what people must do in their jobs. This approach allowed a system to run efficiently and effectively. Later, the combination of equipment descriptions and exact task descriptions provided instructional designers with the information they needed to create effective training. Instructional designers then identified instructional goals and the instructional content to support those goals (Jonassen et al., 1999)3.

  C. Learning Outcomes Taxonomy

A taxonomy creates a hierarchy or organization of things to make it easier to catalog items. A learning outcomes taxonomy is used to classify different types of learned capabilities, such as the type of performance you expect a learner to exhibit after receiving training. Learning objectives differ from learning outcomes in that an objective is a specific statement about the performance, while the outcome tells us about the knowledge that performance demonstrates. Example: "Identify foodborne pathogens" is a learning objective; the learning outcome is knowledge about foodborne pathogens. Understanding the difference between learning objectives and outcomes will help the designer develop learning strategies.

There are different learning taxonomies. The most commonly used is Bloom's, developed in 1956 for the purpose of writing learning objectives. There are many others that should be considered when defining learning objectives: Merrill's, Gagné's, and Jonassen and Tessmer's, to name a few. A little research will assist in writing a better learning objective, which leads to the desired outcomes.
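As a rough illustration of how a taxonomy supports objective writing, an objective's action verb often signals its level in Bloom's taxonomy. The verb lists below are common associations only, not an official mapping, and the `bloom_level` helper is a hypothetical sketch:

```python
# Illustrative sketch: suggesting a Bloom's taxonomy level from an
# objective's opening action verb. Verb lists are common associations only.
BLOOM_VERBS = {
    "remember":   {"identify", "list", "recall", "name"},
    "understand": {"describe", "explain", "summarize"},
    "apply":      {"demonstrate", "use", "perform"},
    "analyze":    {"compare", "differentiate", "examine"},
    "evaluate":   {"justify", "critique", "assess"},
    "create":     {"design", "construct", "develop"},
}

def bloom_level(objective: str) -> str:
    """Return the Bloom's level suggested by the objective's first word."""
    verb = objective.lower().split()[0]
    for level, verbs in BLOOM_VERBS.items():
        if verb in verbs:
            return level
    return "unknown"

print(bloom_level("Identify foodborne pathogens"))  # remember
```

The point of the sketch is the discipline it encodes: if the opening verb does not map cleanly to a level, the objective probably needs rewording before design begins.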
At the conclusion of the analysis process, the first two elements are known:

  • Task level: What are you asking the learner to do as a result of this objective? Example: Identify safe food handling.
  • Learning objective: What is the learner expected to know or do at the conclusion of the instruction (learned capabilities)? Example: Know the steps that ensure food safety.

  D. Learning Content Object Type

Clark (1998)4 described digital learning objects as small pieces of content that support reuse and repurposing. These small objects can then be used to generate new courses or job and performance aids. She defined two types of digital learning objects:

  1. information or knowledge objects, and
  2. instructional objects

Learning Objects

Learning objects such as text, graphics, or sound are small pieces of information or knowledge that can be collected together to make lessons or learning events. These content types are delivered as presentations of facts, concepts, processes, procedures, or principles.

Instructional Objects

Instructional objects are learning objectives, practice exercises, and feedback. They are usually delivered in visual or aural modes as lessons, assessments, and practice exercises.

  • Content type: What type of content is presented? Example: Facts about food safety.

  E. Instructional Design

Instructional Strategy

Instructional strategy refers to the most appropriate method of instruction for a particular learning objective. The range of methods is broad for resident training and education but much narrower for blended and web-based training and education. Selecting a method of instruction means relating methods to learning outcomes.

  • Instructional Strategy: What instructional strategy is most appropriate for the learning objective? Example: For most sequential procedural steps, a program for drill and practice works best.

    1. Sensory Modality

Sensory modality asks what senses are targeted in the learning event. Moreno and Mayer (1999)5 replaced visual text with audio in multimedia instruction to decrease working-memory load and improve learning; this is known as the modality effect. The result was an increase in learning as long as the time period was short and the learner had no control over pacing. If the audio runs too long and the learner has the ability to move on, they probably will, and the effect on learning is lost. For optimal learning and reduced cognitive load, keep the narration short and to the point, and control pacing until the audio is finished.

Visual stimulus is used extensively in multimedia instruction, but there is little evidence that it is superior to text or auditory stimulus. Combining two or more stimuli can actually overload the learner and decrease learning.

Selection of the right stimulus should support the learning event.

  • Sensory modality: What senses are targeted in the instruction? Example: Cognitive senses; read and recall using text and audio.

    2. Locus of Control

Locus of control refers to who controls navigation and pacing of the learning event. Program control means that the software itself controls the learner's movement through the courseware. This occurs when the learner clicks certain instructional cues and the courseware reveals more information or opens a new window; the learner is not clicking the NEXT button.

Learner control means that all navigation (forward, backward, skipping ahead using a menu) and pacing (clicking through and stopping where you want) are controlled by the learner alone. Learner control has been found to enhance satisfaction with multimedia instruction.

  • Locus of control: Is the program controlling navigation and pacing or is the learner? Example: The program should control pacing of the content and navigation from page to page.

    3. Media

Media should support the learning outcome. Overloading the screen with media that doesn't add instructional value can reduce learning.

  • Media: What media best supports the learning outcome? Example: Visual depiction of the steps in text, then a picture of what right looks like.

    4. Interactivity Level

Interactivity level defines the type and quality of interaction the student will have with the content itself.

  • Interactivity level: What level of interactivity best matches the level of learning? Example: Reading and recall are passive activities with no branching, so level one interactivity is appropriate.

    5. Assessments

Assessments are not the end of learning; they are the acknowledgement of learning. It is important to correctly align what is assessed with what was taught, but it is just as important to match the type of assessment to the level of learning.

  • Assessment: Does the assessment match the level of learning? Example: For procedural steps, not only recall but also sequence should be assessed; use drag and drop to check order as well as each step.
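The analysis decisions above can be gathered into one record per learning objective before design begins. The record structure below is only a suggested sketch, and the field values continue the food-safety example used throughout this section:

```python
from dataclasses import dataclass

# Illustrative sketch: one record capturing the analysis decisions for a
# single learning objective. Structure and field names are suggestions only.
@dataclass
class AnalysisRecord:
    task_level: str             # what the learner is asked to do
    learning_objective: str     # the learned capability
    content_type: str           # fact, concept, process, procedure, principle
    instructional_strategy: str # method matched to the objective
    sensory_modality: str       # senses targeted
    locus_of_control: str       # program- or learner-controlled
    media: str                  # media supporting the outcome
    interactivity_level: int    # 1 = passive, higher = more interactive
    assessment: str             # assessment matched to the level of learning

food_safety = AnalysisRecord(
    task_level="Identify safe food handling",
    learning_objective="Know the steps that ensure food safety",
    content_type="Facts about food safety",
    instructional_strategy="Drill and practice for sequential procedural steps",
    sensory_modality="Text and audio (read and recall)",
    locus_of_control="Program controls pacing and page-to-page navigation",
    media="Steps in text, plus a picture of what right looks like",
    interactivity_level=1,
    assessment="Drag and drop to check both step and sequence",
)
print(food_safety.learning_objective)
```

A completed record makes gaps in the analysis visible at a glance: any field the team cannot fill is a decision still owed before design starts.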

Putting It All Together

[Image: putting-it-all-together diagram]

Worked Sample

[Image: worked example]

References:

1 Romiszowski, A. J. (1981). Designing instructional systems. London: Kogan Page.

2 Gagné, R. M. (1962). Military training and principles of learning. American Psychologist, 17, 83-91.

3 Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional design. Mahwah, NJ: Lawrence Erlbaum Associates.

4 Clark, R. C. (1998). Recycling knowledge with learning objects. Training & Development, October 1998, 60-61.

5 Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning. Journal of Educational Psychology, 91, 358-368.

Page Last Updated: 10/29/2015 11:00:40 AM