Create a behavior management guide for beginning teachers at either the primary or high school level. Choose three evidence-based practices from the list below. Research to find a journal article that supports the use of each one (books are not accepted; these must be journal articles). These articles should be current (within the past seven years) and from a peer-reviewed journal. Use the information in the article to complete the matrix on the following page. Please remember to paraphrase the information into your own words, to avoid plagiarism! Begin the guide with an introduction that describes the importance of having a solid knowledge base of research-based behavior management strategies and interventions. The practices should be well-aligned with each other and a particular philosophy (behavioural, cognitive-behavioural, psychoeducational, etc.; lecture slides about these philosophies are provided), and be appropriate for the stage of the population you will be teaching. The evidence you provide should be from journal articles from the last seven years. These should be referenced using APA style. Articles must be sourced from high-quality peer-reviewed journals.
EDST5133
EVIDENCE-BASED PRACTICE ASSESSMENT
Note: YOU MUST TYPE DIRECTLY INTO THIS DOCUMENT! DO NOT SAVE THE MATRICES AS PDFs AND INSERT INTO A WORD DOC!
Name:
Tutorial time:
Part 1 Directions: Choose three evidence-based practices from the list below. Research to find a journal article that supports the use of each one. These articles should be current (within the past seven years) and from a peer-reviewed journal. Use the information in the article to complete the matrix on the following page. Please remember to paraphrase the information into your own words, to avoid plagiarism!
2,000 words
1. Physical layout of the classroom is designed to be effective
2. Predictable classroom routines
3. Class expectations
4. Active supervision
5. Opportunities to respond
6. Specific praise
7. Prompts and/or precorrections
8. Error corrections
9. Social skills training
10. Positive reinforcement systems/token economies
EBP 1
Reference (APA Style):
Name and description of program or practice (define the practice):
Participants and setting:
Methodology:
Results:
Ways the EBP can be used to support student behaviour in your classroom:
Are there any special materials or supports that are necessary to effectively implement the practice?
How can teachers track student progress and assess the effectiveness of the EBP?

EBP 2
(Complete the same fields as for EBP 1.)

EBP 3
(Complete the same fields as for EBP 1.)
Part 2: Discuss the importance of teachers using evidence-based practices, and what they can do to discover what practices have an evidence base (base your answers on the literature): 1,000 words
References
Academic writing & APA style
For Dummies
Why is it even important?
(a.k.a. Why do I need to know this stuff?)
It signals to your reader that you understand the requirements of the discipline, establishing your discursive legitimacy
Put simply, it shows you know ‘the rules of the game’; you know your stuff and mean business
For exampal, wud u guize c me as a legit smart person if I wrote liek this fam?
How would you view a paper written with this font?
It shows your attention to detail and that your work is carefully considered
Imagine you are reviewing a manuscript looking to be published in a journal, and you find they have written 0.0062 instead of 0.00062 in their calculations, meaning their experimental bridge will collapse, or their experimental medication will actually poison people – how would you look at the rest of their manuscript?
What does good APA writing look like?
(The easy stuff)
It is correctly formatted
Size 12 font (usually Times New Roman)
2.54 cm (1 in.) margins on all sides
Double spaced
Text aligned left
Indent for each new paragraph
It considers the effect of layout on readability
Headings if useful
Heading levels clear
Uses bullet points, numbered lists etc. if useful
All in-text citations match up with the reference list (names, year, all information is correct)
The reference list is complete and correctly formatted
Overall, the emphasis is on clarity and accuracy
What does good APA writing look like?
(The hard stuff)
It is rigorously supported
Imagine an unsympathetic, cantankerous reader who is trying to disagree with you at every turn -> your job is to show them that your ideas are supported by the literature, and that your conclusions are reasonable and well argued based on the evidence you present
Do not rely on personal experience (e.g., well I ‘just know’ that x is a good instructional practice -> then why can’t your reader ‘just know’ that you’re wrong?)
It uses language extremely precisely
Don’t say “Many studies show that …” and then only give one citation -> your reader now expects to see ‘many studies’
Don’t use words carelessly; “X is a useful strategy for all teachers” -> your reader now expects you to argue that your claim applies to EVERY teacher; “this proves that” -> your reader now expects to see incontrovertible evidence of your claim
Even clumsy use of pronouns can be troublesome, like “We can therefore see…” -> who is we? The author and the reader? A team of authors? Someone else?
What does good APA writing look like?
(The hard stuff)
It is current
Good writing draws on the most recent literature available
By the time a paper is finally published, the research can already be 2-3 years old
It does not rely on excessive use of quotations, or use one citation over and over again
Using too many quotations detracts from your own voice; your reader will think ‘why don’t I just go and read the original work?’
Overuse of a single source shows that you can’t engage thoroughly with the literature
So how do I write like this?
It is rigorously supported
Be critical of your own work, imagine your reader asking ‘how do you know this?’
Draw extensively on the literature
Organise the research into themes (if possible)
It uses language extremely precisely
Be critical of your own work, make sure every word has a purpose
Don’t just write something because it sounds good
Hedge your writing where necessary
The literature suggests…
Some students may…
Strategy x could…
Be careful of attitudinal lexis
e.g. “This is a great strategy…”, “This is a fantastic tool for…” vs “Strategy X has a strong evidence base”, “The research suggests that using strategy x can lead to…”
So how do I write like this?
It is current
Use the most recent sources you can find
Older sources are still acceptable (especially for major works/ seminal papers), but your paper should mainly be using the most recent research available
It does not rely on excessive use of quotations, or use one citation over and over again
Paraphrase as much as possible -> this shows your interaction with and understanding of the themes of the research
Try to organise writing theme by theme if possible
For example: “Sample text about idea 1 (Citation 1; Citation 2). Sample text about connection between idea 1 and idea 2 (Citation 3; Citation 4; Citation 5). Citation 6 also argues idea 3.”
Plagiarism, paraphrasing, and referencing
What is plagiarism?
A common answer is ‘copying someone else’s work’ or ‘taking someone’s work as your own’ –> this can leave some people unsure about including the work/ ideas of others in their own writing (e.g., am I just stealing?)
This answer leaves many confused about ‘self-plagiarism’ -> how can I ‘steal’ my own work? Don’t I own it?
A more accurate way to describe plagiarism is ‘claiming unoriginal work as original’
If you don’t acknowledge the source of your ideas, your reader will assume they are your original ideas -> claiming unoriginal work as original
This is why ‘self-plagiarism’ is an issue; an author cannot just republish the same journal article, twice a year, every year, for 50 years, and say they have 100 publications -> they are claiming unoriginal work (the 99 times it was republished as new) as original
Plagiarism, paraphrasing, and referencing
So what do I do when writing?
Acknowledge everything that is not your own work/ idea – this may be a LOT of your paper, that is ok!
Good research is an attempt to add a tiny piece of understanding to the collective knowledge of the discipline – this is why academic writing relies so heavily on citations and has massive reference lists; authors argue ‘here is what we know, here is an extra 0.0001% that I believe we can add’
Be careful with paraphrasing
Do NOT take sentences and try to change words around, replace with synonyms etc. -> instead draw out the implications of the paper overall and then state these in your own words
Do not be afraid to draw heavily on the literature – it represents as close as you can get to what we ‘know’, why wouldn’t you want to use this as much as possible when your job is to demonstrate your mastery of the knowledge?
Understand the purpose of assessments -> (in general) you need to demonstrate your engagement with the literature, and the ability to draw out the implications for your own practice
Worked examples – Text 1
Teaching is one of the most difficult but rewarding professions. In order to be an effective teacher, it is really important to have empirically-supported evidence based practices that are proven to be effective by research. Using evidence based classroom management practices is an excellent way to help students learn because it can keep them engaged and on task. This in turn leads to improved academic achievement.
Teaching is one of the most difficult but rewarding professions – Weak opening – difficult compared to what? Rewarding to whom? How can you claim this without evidence? In order to be an effective teacher, it is really important to have empirically-supported evidence based practices that are proven to be effective by research – Clunky and repetitive writing. Using evidence based classroom management practices is an excellent way to help students learn because it can keep them engaged and on task – This is a bit better, the author is making a more reasonable claim, but without any evidence their argument falls flat. This in turn leads to improved academic achievement – The author’s rhetorical organisation is clear (i.e., good practices -> student engagement -> improved performance), but they have not argued convincingly for this position.
Worked examples – Text 2
An “evidence-based practice” is one which is “supported with methodologically sound, peer-reviewed studies” (Smith, 2015, p. 15). Using evidence-based practices has “been shown to improve on-task behaviour by up to 62%” (Brown, J. T., 2012, p. 8). We can see beginning teachers can benefit from “integrating these practices” into their own classrooms, allowing for the “effective delivery” of content and improved academic outcomes. (Jones, pp. 247)
An “evidence-based practice” is one which is “supported with methodologically sound, peer-reviewed studies” (Smith, 2015, p. 15) – Clear definition, APA style is correct; but unnecessary quoting, this could have easily been paraphrased. Using evidence-based practices has “been shown to improve on-task behaviour by up to 62%” (Brown, J. T., 2012, p. 8) – The author is providing clear evidence for the claims, so they are beginning to build their argument; but an APA error and even more quotes. We can see beginning teachers can benefit from “integrating these practices” into their own classrooms, allowing for the “effective delivery” of content and improved academic outcomes. (Jones, pp. 247) – Compared to the previous example, the author’s conclusion feels earned. They have drawn out the implications from different pieces of research, and made a reasonable argument; however they have quoted unnecessarily and have issues with APA style.
Worked examples – Text 3
Evidence-based practices (EBPs) refer to classroom strategies and/ or interventions that are thoroughly supported by sound research (Smith, 2015). Using EBPs may allow beginning teachers to feel more confident within the classrooms by providing them with tools to maximise desirable behaviour, and decrease undesirable behaviour (Brown, 2012; Jones, 2011).
Evidence-based practices (EBPs) refer to classroom strategies and/ or interventions that are thoroughly supported by sound research (Smith, 2015) – The author has used the same source as text 2, but paraphrased it, allowing them to establish their own voice; APA style is correct. Using EBPs may allow beginning teachers to feel more confident within the classrooms by providing them with tools to maximise desirable behaviour, and decrease undesirable behaviour (Brown, 2012; Jones, 2011). – The author is able to show their engagement with the literature by drawing on multiple sources and paraphrasing the implications; they hedge their writing to ensure their claim is reasonable, and use APA correctly. Their economy of style lets them introduce more information in fewer words (cf. text 1, which has 4 sentences, compared to only 2 here)
Assessment 1 questions
1. Best way to find articles?
a. UNSW library, start with a lot of specific search terms and then become more general
b. Textbooks – look at their reference list
c. Search for journals – look on Moodle, week 1, hyperlink to “list of journals”
i. UNSW library -> eJournals -> search for title or choose letter
2. Structure?
a. Part 1 – complete template (hyperlink on Moodle “complete the matrix”)
b. Part 2 – prose text, essay
3. Dividing word count
a. Up to you, but recommend ~2,000 for part 1; ~1,000 for part 2
b. This does not necessarily align with marking
4. Filling out the matrix
a. Total of 3 tables to be completed, 1 article per table
5. Defining methodology?
a. How the research was conducted -> what procedure did the researchers follow to find the results
6. How many EBPs?
a. 3 articles = 3 different practices
7. Combining references?
a. In part 1 – no. Each table should be self-contained and not refer to other sources [except you might draw on other articles for the section “ways it can be applied in your classroom”]
b. In part 2 – yes, you can combine sources
8. Structure/ approach for part 2
a. For first question – make claims and then support them
b. For second question – providing suggestions/ strategies + describing how to appraise the literature/ research
9. Do all five sources have to be journal articles?
a. No
b. For part 1, ALL THREE must be journal articles, but the other 2 you use can be other sources (e.g., textbooks, professional literature)
10. What reference style?
a. APA – google ‘APA Purdue’ for a good guide
11. Will this be on Moodle?
a. Yes
12. How do I know if it’s a good article (for the purpose of this assessment)?
a. Clearly explained so you can complete template
b. Clear implications for behaviour/ classroom management
13. Do the journals/ article contexts have to be specific to Australia/ or my method?
a. No. But you should be able to talk about how you will apply to your context
14. Where is the marking criteria?
a. In the course outline, pp. 10-11
15. When to use page numbers for in-text references?
a. Only when quoting, otherwise just name and year
i. E.g. example text example text (Author, 2015)
ii. Example text “example text” (Author, 2015, p. 15)
16. How to order authors when providing multiple citations?
a. Alphabetical order, NOT chronological. The authors should appear in the same order as the reference list
i. E.g. Example text (Brown, 2015; Jones, 2011)
b. If more than one author per paper, alphabetise by first author’s surname
17. What do I do with stats, d = etc.?
a. You don’t need to give the specific numbers (but you can), focus on the implications/ themes
18. Do references count in the word count?
a. No
19. How strict is word count?
a. In general +/- 10% -> ~2,700-3,300 words
20. What about arguments for conflicting evidence bases etc.?
a. For part 1, avoid. Find three articles that had clear results
b. For part 2, you can consider. You might talk about how it is important to appraise the literature when looking at evidence, talk about generalisability, etc.
i. What do you mean by generalisability?
1. How applicable are the findings/ implications to other contexts?
21. Can we take readings from the course readings?
a. Yes
22. Is EndNote available?
a. Yes, it is
23. What does five citations mean?
a. Five DIFFERENT references, not 1 reference cited 5 times
24. Clarifying total citations
a. Part 1 = 3 TOTAL references, all journal articles
b. Part 2 = AT LEAST 2 references, but probably around 5-6 TOTAL in part 2
25. Time management
a. Check the abstract first, then discussion/ conclusion
b. You are looking for implications, so take out what is necessary
26. Is there a limit on citations?
a. No, you can have as many as needed
27. What if a paper had multiple interventions/ EBPs?
a. That is fine, choose your EBP and focus on that
b. You might talk about how it can be combined in your ‘ways it can be applied in your classroom’ section
28. Where should the introduction come?
a. Before the tables, first element
29. What should the intro include?
a. Overview of the paper and introduction to the topic (you might take 1-2 sources from part 2). You might also say what your classroom will be (e.g. these EBPs will be applied to a year 10 music classroom). Max 200 words
30. What about meta-analyses?
a. For part 1, avoid; make it easier for both of us and choose a single study. You can use meta-analyses to find single studies from their reference lists
b. For part 2, these can be useful to give an overview of the literature
31. Do you need to analyse the ‘applicability’ of the research in the ‘how it will be applied in my classroom’ section?
a. No, think about this, but only write about what your practice will look like
32. What about if there are no materials/ supports?
a. Just write something like “no materials necessary”.
33. Do ‘experimental design’/ ‘procedures’ mean the same thing as ‘method’?
a. Yes
34. Do you need to reference inside the matrices?
a. No, unless quoting. If you are quoting, just add a page number
35. Where to submit?
a. On Moodle, Turnitin submission link
36. Submit as Word doc or PDF?
a. Word doc
37. Submission time?
a. 5pm
38. Isn’t the introduction the same as the beginning of part 2?
a. Yes, your introduction will be very similar to your opening of part 2. If the sentences/ sources are the same/ similar, that is fine.
39. Detail for methodology/ participants/ settings
a. A few sentences e.g. The researchers went into classroom x, measured y, did intervention z and got result 123.
40. Can we cite the textbooks?
a. Yes, for part 2
41. How specific should the description of my practice be?
a. Very explicit, describe how the EBP will look in your classroom. E.g. In my classroom/ In a year 12 maths class…
42. Do the matrices have to have APA formatting? E.g. double spaced, new paragraph indented
a. No, just whatever is neat/ readable
43. Secondary citations?
a. Best case scenario – find the original source, read it, and cite that
b. If not, acknowledge that it is a secondary citation
i. Example text (Author 1, as cited in Author 2, 2015). ONLY author 2, the one you read, will appear in your reference list
LECTURE 1
CLASSROOM MANAGEMENT THEORY
Intro
Policy on mobile tech
Course Outline
Location of material
Your text + other readings
Importance of attending lectures
Q & A
Expectations
Expectations: Be Respectful and Be Responsible

Lecture
Be Respectful: Sit quietly and listen; save conversations with others until after the lecture; pass the sign-in sheet along to the next person; silence all electronic devices.
Be Responsible: Attend; arrive on time; listen/take notes; use electronic devices for notetaking only.

Tutorial
Be Respectful: Sit quietly and listen when others are talking; allow others their opinions.
Be Responsible: Attend; arrive on time; participate.

On your own
Check for information in the course outline or on the Moodle site before emailing the lecturer or tutor; complete readings; complete assessments and submit them on time.
Learning Goals
You will:
Recognise the importance of classroom management theory in the development of classroom management plans
Identify and critique a diversity of classroom management theories
Understand options for choosing and developing a personal theoretical approach to classroom management
Understand several classroom management theories which have potential congruence with your learning and teaching philosophy
Has Classroom Management Changed?
Principles of Classroom Management
(Brady & Scully, 2005)
Engage students: planning teaching and learning strategies
Establish rules
Develop the culture
Select appropriate strategies
Promote self-discipline
Practice consistency
Why Learn the Theory???
Making sense of student behaviour
The ability to draw strategically on the wide pool of theory about student behaviour and classroom management is critical to engaging in evidence-based practice.
Management Theory Groups
Psychoeducational Theories: student misbehaviour is an attempt by students to meet their needs. Teachers should create learning environments that meet these needs.
Cognitive Behavioural Theories: advocate the proactive involvement of students in negotiating improved behaviours.
Behavioural Theories: highly procedural and focus singularly on modifying observable behaviours.
Psychoeducational Theories
Goal Centered Theory (Rudolf Dreikurs)
Look for functions of student behaviour and then negotiate appropriate ways for these needs to be met
Student discouragement is the primary cause of misbehaviour (group belonging)
Includes strategies for dealing with challenging behaviour and vulnerable students
Can be applied to a whole school setting (SWPBS)
Prevention of misbehaviour is preferred over intervention
Goal Centered Theory: Practices
Develop a democratic teaching style
Establish mutual respect and valuing
Identify and respond to student strengths and abilities
Use encouragement to minimise discouragement and meet students’ need to belong and be valued
Apply safe natural consequences and negotiated logical consequences
Use of regular whole-class discussions about rules, consequences, challenges, and achievements
Goal Centered Theory: Intervention
Identify the function of the behaviour (attention, power, revenge, avoidance)
Assist students in understanding their misbehaviour and motivation
Assist students in pursuing positive goals to meet their need to belong
Encourage the discouraged
Encourage students to acknowledge, value, and enact logical consequences (restitution, not punishment)
GCT: Criticisms
Lacks a sound evidence base
Students may be unable/unwilling to recognise their motives
Teachers may not have the training to recognise complex motives
Not compatible with more autocratic models, may be difficult to enact with very challenging students
Must have acceptance from the school community
Choice Theory
(William Glasser)
All behaviours are to satisfy a need (belonging, control, freedom, fun)
Developmental approach to behaviour management (non-coercive)
Motivation is intrinsic, only the individual can control where and how this motivation is directed and applied.
Student boredom, frustration, and inappropriate behaviour in schools are a product of learning environments that fail to satisfy basic needs through appropriate behaviours.
Choice Theory
Positive Practices
Recognise and respond to your responsibility to create a quality school where students’ basic needs are best met, and respect is central to teacher-student relationships
Develop a management style that focuses on facilitating learning.
Adopt cooperative learning strategies as a priority pedagogy
Choice Theory
Intervention
Acknowledge that the locus of the problem behaviour lies with the school/classroom environment and teacher/student relationships
Rebuild positive relationships between students and teachers by restructuring teaching/learning practices
Engage individual students in problem-solving meetings
Choice Theory
Challenges
Best implemented in a school-wide context
Takes considerable time and effort to plan and implement
Focuses on long-term change, so short term issues may not be adequately addressed
Offers few options for dealing with the behaviours of very challenging students (just rebuilding relationships)
Lacks a strong research base
Cognitive Behavioural Theories
Cognitive Behavioural Theory
(Jane Kaplan & Joseph Carter)
Individuals make choices about their behaviour
Individuals are self-directed, not passive responders to external influences
Choices are influenced by consequences, social context, values, motivation, problem-solving skills, self-organisational skills and interpretation of feedback from others
Cognitive Behavioural Theory
Focuses on developing students’ independent cognitive skills in managing behavioural problems to support students to control their own thinking and feelings so that they can better appraise what they want, are doing, and thinking
Successful social and academic engagement is dependent upon emotions, beliefs, abilities, and skills
The development of constructive thinking habits helps individuals to regain control over their emotions and behaviours, and can reduce stress and improve mental health
Cognitive Behavioural Theory
Positive Practice
Help students to understand their thinking processes and gain self-control skills
Actively collaborate with students to select behavioural goals
Authority without coercion; earning and giving respect
A facilitative learning environment where students are encouraged to manage themselves and success is valued
Employing strategies such as rewards and punishment, but secondary to social reinforcement
Cognitive Behavioural Theory
Intervention
Identify students who might benefit from this more intensive intervention
Assess which skills students need, and implement a training program to teach these skills
Implement cognitive training, which involves demonstration, rehearsal, opportunities for use (application), and reinforcement
Ensure that interventions include transfer and generalisation activities
Cognitive Behavioural Theory
Challenges
Conflict between improving student motivation with an internal locus of control and using rewards and punishments (external locus of control)
Lack of emphasis on emotions as motivating factors may lessen student engagement in CBT
Evidence base is conflicting
Mostly suited to more cognitively mature children and adolescents
Behavioural Theories
Assertive Discipline
(Lee & Marlene Canter)
Classroom discipline plan to maintain order and facilitate learning and teaching
Teachers must be assertive and exercise their rightful duty to control students by setting clear behavioural limits
Clear system of rewards and sanctions (teachers own classrooms, students do not)
Compliance (obedience) provides psychological safety for students
Student misbehaviour is caused by unstable home lives
Assertive Discipline
Positive Practice
Establish an ordered and productive teaching and learning environment (includes good curriculum and pedagogy)
Design and teach a comprehensive discipline plan with positive and negative consequences
Get to know students’ names and interests
Focus on helping students achieve academic success
Invoke negative consequences in a calm, matter-of-fact way
Assertive Discipline
Intervention
Identify students who are not responding to the class discipline plan
Calmly but publicly reiterate rules, expectations, consequences
Engage closely with these students to ensure they understand their misbehaviours and consequences for continued noncompliance (Outside of class time)
Develop an individualised behaviour plan with the student
Assertive Discipline
Challenges
Not rigorously evidence-based
Presumes absolute teacher authority (no democratic principles or student rights)
No pathways for student self-discipline
May change behaviours, but doesn’t change the reason for them
Applied Behaviour Analysis
(Paul Alberto & Anne Troutman)
Based on Skinner: behaviours are controlled by setting events, antecedents, and consequences.
Reinforcing and punishing behaviours can increase or decrease their frequency, intensity, or duration
Behaviours are observable, functional, and purposeful
The classroom environment should be changed to improve behaviours
Applied Behaviour Analysis
Positive Practices
Establish classroom order so that students can be successful at learning
Use a direct approach to teaching (as opposed to constructivism)
Focus instruction on increasing desirable learning behaviours and skills, and decreasing behaviours which inhibit learning
Apply ABA practices in the least intrusive and restrictive way
Applied Behaviour Analysis
Intervention
Conduct data-based baseline assessment of targeted behaviours and define them accurately
Implement an intervention, monitor progress
Manipulate antecedents to impact the consequences of the target behaviours
Increase the reinforcement of the desired behaviours
Punish misbehaviour in the least intrusive manner
Include training for generalisation
Management Styles: Authoritarian
Places firm limits and controls on the students.
This teacher rarely gives hall passes or recognizes excused absences.
Vigorous discipline and expected swift obedience.
Students need to follow directions and not ask why
Students do not interrupt the teacher- verbal exchange and discussion are discouraged
Authoritative
Teacher places limits and controls on the students but simultaneously encourages independence.
This teacher often explains the reasons behind the rules and decisions.
If a student is disruptive, the teacher offers a polite, but firm, reprimand.
Open to considerable verbal interaction, including critical debates.
Democratic
Students have the rights of freedom, justice and equality
Class meetings are used to make decisions about important matters, such as setting rules
Students are encouraged to voice their opinions and contribute to class
Teacher maintains a professional approach to consequences and assists the student in recovering from his behaviour, getting back on track, and doing something different next time
Indifferent
Teacher places few demands, if any, on the students and appears generally uninterested.
Often feels that class preparation is not worth the effort.
Field trips and special projects are out of the question- too much work.
May use the same materials, year after year
Classroom discipline is lacking; teacher may lack the skills, confidence, or courage to discipline students
Laissez-Faire
Teacher places few demands or controls on the students.
Accepts the students’ impulses and actions and is less likely to monitor their behaviour.
The teacher strives not to hurt the students’ feelings and has difficulty saying no or enforcing rules.
If a student disrupts the class, the teacher may assume that the student is not getting enough attention.
Inconsistent discipline.
Development and Initial Evaluation of the Measure of Active Supervision and Interaction

Melissa A. Collier-Meek, PhD, BCBA (University of Massachusetts Boston), Austin H. Johnson, PhD, BCBA (University of California, Riverside), and Anne F. Farrell, PhD (University of Chicago)

Assessment for Effective Intervention, 2018, Vol. 43(4), 212–226. https://doi.org/10.1177/1534508417737516. © Hammill Institute on Disabilities 2017.

Abstract
Implementation of research-based, Tier 1 behavior management strategies can be monitored to provide data-driven feedback and in support of integrity. The Measure of Active Supervision and Interaction (MASI) was developed to measure four behavior management practices (i.e., Praise, Correction, References to Behavior Expectations, Active Supervision) using systematic direct observation. This study was designed to address research questions related to reliability and validity by applying the MASI to evaluate staff behavior in seven out-of-school time programs. Findings indicate that two raters can complete the MASI with high agreement. Ratings are attributable largely to desirable sources of variance, and content validators positively rate the measure. Results are nonsignificantly correlated with established implementation measures for Positive Behavior Interventions and Supports.

Keywords: observation

Corresponding author: Melissa A. Collier-Meek, Department of Counseling and School Psychology, University of Massachusetts Boston, 100 Morrissey Blvd., Boston, MA 02125, USA. Email: mel.colliermeek@umb.edu
To encourage prosocial student behavior, education professionals (e.g., teachers, paraprofessionals, out-of-school time [OST] staff) employ research-based, Tier 1 behavior management practices (Epstein, Atkins, Cullinan, Kutash, & Weaver, 2008; Newcomer, Colvin, & Lewis, 2009; Simonsen, Fairbank, Briesch, Myers, & Sugai, 2008). Successful implementation of these Tier 1 practices requires adults to establish three to five behavior expectations that are defined across settings and activities and explicitly taught to students (Simonsen et al., 2008), and provide high levels of behavior-specific praise and low rates of correction (e.g., Sutherland, Wehby, & Copeland, 2000). High levels of behavior-specific praise and low levels of corrections are associated with improved on-task behavior (Partin, Robertson, Maggin, Oliver, & Wehby, 2009; Sutherland et al., 2000), while the use of consistent expectations as a component of Positive Behavior Interventions and Supports (PBIS) has been found to increase compliance and reduce rates of problem behavior in classrooms, hallways, and schools generally (Bradshaw, Koth, Thornton, & Leaf, 2009; Leedy, Bates, & Safran, 2004). To encourage high rates of praise and references to behavior expectations and simultaneously minimize the use of correction, educators engage in active supervision by moving around the setting, observing, and interacting with students (Colvin, Sugai, Good, & Lee, 1997). Active supervision is associated with lower rates of problem behavior (Colvin et al., 1997; Lewis, Colvin, & Sugai, 2000). In spite of ample evidence supporting their effectiveness, these practices are applied inconsistently in schools and OST settings, thereby limiting their potential to affect student outcomes (Reddy, Fabiano, Dudek, & Hsu, 2013b; Ruberto, 2015).
Evaluating the Implementation of Tier 1 Behavior Management Practices
To support the delivery of effective practices, it is necessary to understand the extent and circumstances under which they are implemented, which can be done through treatment integrity assessment. As this is an emerging research area, there are few well-established implementation measures, even for behavior management strategies (Collier-Meek, Fallon, & Gould, accepted; Gresham, 2014; Sanetti & Kratochwill, 2009). In the sections that follow, we summarize the multistep process for developing treatment integrity tools and review two types of behavior management measures (i.e., self-report tools and observational measures) before describing gaps in the literature.
Tools Developed Per Treatment Integrity Assessment Guidelines
Current recommendations for treatment integrity assessment can be described in five steps: operationalize intervention components, consider varied dimensions (e.g., adherence, quality), select an assessment method (e.g., observation, permanent product review), determine a rating format (e.g., dichotomous, Likert-type scale), and sum ratings for a total (Collier-Meek, Fallon, Sanetti, & Maggin, 2013; Gresham, Dart, & Collins, 2017). Researchers apply these procedures to assess implementation of behavior management practices; indeed, interobserver and intraobserver agreement data provide initial evidence for the reliability of the resulting data (Collier-Meek, Sanetti, & Boyle, 2016; Gresham et al., 2017).
Beyond reliability, student outcome data provide initial evidence of the convergent validity of treatment integrity measures when higher levels of behavior management and student prosocial behavior correlated as theoretically expected (e.g., significant positive correlations; Collier-Meek et al., 2016). In spite of this evidence in support of validity, Sanetti and Collier-Meek (2014) evaluated multiple treatment integrity measures and noted method bias and contextual variations; estimates varied depending on the observation method, current classroom activity, and other factors (e.g., student behavior, observation timing). Thus, the ability of the current treatment integrity guidelines to steer the development of measures that produce data that sufficiently and soundly assess key aspects of behavior management remains unclear.
Established Self-Report Tools

Several self-report tools have been developed to facilitate implementers’ assessment of Tier 1 behavior management. PBIS practitioners frequently use the Classroom Management Self-Assessment Revised (Simonsen, Fairbank, Briesch, & Sugai, 2006), a teacher self-report tool that supports reflection on and improvements in classroom management implementation. This tool includes space to tally positive and negative student contacts to calculate a ratio of positive-to-negative interactions, as well as 10 items pertaining to classroom management practices (e.g., classroom structure, expectations, active engagement) which are rated as present or absent. With the Classroom Ecology Checklist, the implementer rates the extent to which specific behavior management practices aligned with six domains are present (i.e., no, somewhat, yes; Reinke, Herman, & Sprick, 2011). This checklist was designed to be used in conjunction with other data sources (e.g., implementer interview, observation) in the context of the Classroom Check Up, an established method of consultative support (Reinke, Lewis-Palmer, & Merrell, 2008). Although these measures are valuable for self-reflection as one component of treatment integrity assessment, findings consistently indicate that implementers overestimate their level of implementation, suggesting that self-appraisal may not be an appropriate method to estimate treatment integrity (Wickstrom, Jones, LaFleur, & Witt, 1998).
Established Observation Measures

Observational measures of teacher classroom practices with well-established psychometric properties include the Classroom Assessment Scoring System (CLASS; Pianta, La Paro, & Hamre, 2008) and the Classroom Strategies Scale–Observer Form (CSS-OF; Reddy, Fabiano, Dudek, & Hsu, 2013a). The CLASS posits a multilevel latent structure and measures the quality of classroom processes, including various features of student–teacher interactions. Items are rated on a 7-point scale and produce scores along three dimensions (emotional supports, classroom organization, and instructional supports) after an extended period of observation (Pianta & Hamre, 2009). The CSS-OF is a measure of instructional and behavior management practices that includes frequency counts, Likert-type scaling, and checklist items that are rated following two 30-min observations (Reddy et al., 2013a). The CLASS and CSS-OF are both general and comprehensive measures of teacher implementation and classroom characteristics, used periodically to evaluate classroom practices (including Tier 1 behavior management) and facilitate data-driven professional development.
Gaps in Tier 1 Behavior Management Implementation Assessment Literature

As is the case with all assessment decisions, selecting an instrument to examine Tier 1 behavior management practices requires consideration of context, purpose, and feasibility. Both the CLASS (Pianta et al., 2008) and CSS-OF (Reddy et al., 2013a) are excellent candidates, as they were systematically developed, demonstrate sound psychometric properties, and provide reliable estimates of practice. To facilitate classroom teachers’ self-assessment and planning for improvement, existing self-report measures (e.g., Classroom Management Self-Assessment Revised; Simonsen et al., 2006) could be appropriate and feasible tools, despite concerns about the accuracy of self-report (e.g., Wickstrom et al., 1998). A range of tools developed under existing treatment integrity guidance might help estimate intervention implementation; however, the means and form of assessment depends on assessment and context factors (e.g., purpose, target behaviors, observation opportunities and duration, activities underway; Sanetti & Collier-Meek, 2014). There remains a need for a flexible, sensitive, and formative way to evaluate delivery of discrete behavior management practices that can be utilized across settings, within and beyond the classroom (hallways, playgrounds, OST). Whereas the above-described, established observational measures produce sound estimates of overall classroom management and teacher–student relations, emerging research employs systematic direct observation (SDO) to evaluate more discrete implementer behaviors. In this case, specific practices (e.g., praise statements) serve as target behaviors for instruction, observation, and measurement within a data-driven paradigm (Simonsen, MacSuga, Fallon, & Sugai, 2013). Research on the psychometric properties of SDO approaches is limited and additional work is needed to evaluate the reliability, validity, feasibility, and utility of data collected via SDO to assess key behavior management practices.
SDO of Tier 1 Behavior Management Implementation

SDO is a well-established, flexible measurement method wherein behavior is observed during a specified time period and systematic data collection procedures are applied to evaluate the specific dimensions of target behaviors (Cooper, Heron, & Heward, 2007; Suen & Ary, 1989). As it is not always possible or feasible to conduct continuous observation, studies often use time sampling or interval recording, during which the occurrence or nonoccurrence of a target behavior during a specified interval is coded according to specific decision rules (e.g., coded if present during entire interval; Cooper et al., 2007). Researchers have demonstrated that momentary time sampling, partial interval recording, and whole interval recording produce varying levels of accuracy depending on the presence of mixed intervals, where behavior both occurs and does not occur during a single interval (e.g., academically engaged and unengaged during a 30-s interval; Suen & Ary, 1989). Recent work focuses on sources of variance that emerge across these three methods (see Johnson, Chafouleas, & Briesch, 2017) and the reliability of behavior estimates when averaged over multiple time periods (see Ferguson, Briesch, Volpe, & Daniels, 2012).
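To make these three decision rules concrete, the following minimal sketch (not from the article; the behavior stream, interval length, and function name are invented for illustration) codes the same simulated observation stream under momentary time sampling, partial interval recording, and whole interval recording:

```python
# Illustrative sketch of three interval-recording decision rules (hypothetical
# data; the article reports no code). The behavior stream is modeled as a list
# of per-second True/False observations of the target behavior.

def code_intervals(stream, interval_len, rule):
    """Split a per-second stream into intervals and code each one.

    rule = "momentary": code by the final moment of the interval only
    rule = "partial":   code if the behavior occurred at ANY point
    rule = "whole":     code only if the behavior occurred the WHOLE interval
    """
    codes = []
    for start in range(0, len(stream), interval_len):
        interval = stream[start:start + interval_len]
        if rule == "momentary":
            codes.append(interval[-1])
        elif rule == "partial":
            codes.append(any(interval))
        elif rule == "whole":
            codes.append(all(interval))
        else:
            raise ValueError(f"unknown rule: {rule}")
    return codes

# A 60-s stream containing "mixed" 15-s intervals, where the behavior both
# occurs and does not occur within a single interval.
stream = ([True] * 10 + [False] * 5    # mixed: occurs early, absent at the end
          + [True] * 15                # occurs throughout
          + [False] * 15               # absent throughout
          + [False] * 5 + [True] * 10)  # mixed: absent early, occurs at the end

for rule in ("momentary", "partial", "whole"):
    coded = code_intervals(stream, interval_len=15, rule=rule)
    print(f"{rule:9s}: {100 * sum(coded) / len(coded):.0f}% of intervals coded")
# momentary -> 50%, partial -> 75%, whole -> 25%: the estimates diverge exactly
# on the mixed intervals, which is the accuracy issue Suen and Ary (1989) raise.
```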
SDO has often been applied to evaluate student behaviors (Cooper et al., 2007) such as academic engagement (Johnson et al., 2017) and disruptive and off-task behaviors (Shapiro, 2011). As the application and measurement of behavior management strategies such as praise, references to behavior expectations, and correction can be defined and directly observed, they can be evaluated using SDO methodology (Cooper et al., 2007). To evaluate the effectiveness of varied implementation supports within a single case methodology, investigators have monitored teachers’ rates of specific praise (e.g., Simonsen et al., 2013), frequency of student interactions (Colvin et al., 1997), and ratios of praise to corrective statements (Ruberto, 2015). In these investigations, initial evidence of reliability was established through acceptable levels of interobserver agreement (Ruberto, 2015; Simonsen et al., 2013); however, additional research is needed to evaluate the soundness of SDO for implementer behaviors.
Purpose of Study

This study addresses a gap in the literature by detailing the development and initial investigation of the Measure of Active Supervision and Interaction (MASI). Inasmuch as the MASI applies SDO (i.e., momentary time sampling, frequency count) to assess the implementation of four discrete behavior management practices (i.e., Praise, Corrections, References to Behavior Expectations, and Active Supervision), it emerges from and reflects elements of existing and related measurement traditions; however, it focuses on the enactment of four specific behaviors that comprise key components of Tier 1 behavior management in nonclassroom settings. We attempted to appraise the reliability and validity of ratings from the MASI and to evaluate the utility of this measure among OST staff. Specifically, we sought to address three research questions:

Research Question 1: To what extent do scores derived from trained raters’ use of the MASI exhibit consistency between raters and across sessions?
Research Question 2: To what extent do practitioners and researchers report that the MASI adequately represents components of research-based behavior management?
Research Question 3: To what extent do ratings on the MASI agree with ratings on the School-Wide Evaluation Tool (SET) and Benchmarks of Quality (BOQ) program-wide implementation measures adapted for the OST setting?
Method

Participants and Context

Participants were involved in the study in two distinct phases: content validation and observations. Initial measure development and content validation preceded the reliability and validity appraisal, which was conducted across several OST settings. We used two program-wide measures of implementation as part of the overall appraisal of implementation and expected the collective performance of individual OST staff to be related to, yet distinct from, the summary scores on program-wide implementation (divergent validity).
Content validation. Prior to initiating the study, we recruited five researchers and one practitioner to participate in content validation of the MASI. Content validators were recruited based on their expertise and/or experience in OST programs and PBIS. Four held doctoral degrees (66.7%; in special education and school psychology) and two held master’s degrees (33.3%; in social work and human development and family studies). Two (33.3%) were involved in a larger OST and PBIS project (Farrell & Collier-Meek, 2014) but were not involved in measure development. All content validators were female.
Observations. This study was conducted within the implementation of a PBIS intervention (Positive Behavior Support in Out-of-School Time, Positive BOOST [Behavior in Out-of-School Time]; Farrell & Collier-Meek, 2014) across seven distinct programs in a northeastern state. In vivo observations were conducted of OST professionals (N = 147). No demographic information on the OST professionals is available. All OST programs were funded by 21st Century Learning Community Grants; thus, these programs involved academic enrichment, recreation activities or social-emotional learning, and family literacy activities for students attending high-poverty school districts. Programs were operated by public schools (n = 4), local nonprofit organizations (n = 2), and a charter school (n = 1). The average OST program operated for 12.8 hr per week (range = 10–16) for a total of 34.9 weeks during the school year (range = 30–40). Additional information about the OST sites and town characteristics is presented in Table 1.

Within the study, three raters completed each MASI; two were female graduate students and one was a female undergraduate student. All were enrolled in education, psychology, and human development programs at a university in the Northeast and were specifically trained to participate in the implementation and activities research. The two graduate students had prior training and experience with SDO, whereas the undergraduate student had no relevant training or experience prior to beginning the study. Raters were actively involved in Positive BOOST, a project to incorporate PBIS into OSTs (see Farrell & Collier-Meek, 2014). Two raters were present at 27.21% of observations (n = 40).
Measures

MASI. As suggested above, the MASI was developed as a measure of four distinct components of implementation of Tier 1 behavior management practices among individual OST providers. A single administration of the MASI occurs during a 60-min observation interval divided into three 20-min observation periods. Each observation period concerns an individual OST professional and thus results in data corresponding to individual staff, enabling overall estimates of staff behavior during the observation interval. Prior to the onset of the interval, raters record general information including program name, setting, activity, number of students present, and rater name. The rater then randomly selects three of the OST professionals present to be observed. The order of observation is also random. If three or fewer professionals are present for the observation, then only random ordering of observations takes place. Data pertaining to each professional are collected within four sections that incorporate the evaluation of five distinct behaviors (see Table 2). The MASI summarizes observations in these four areas and does not contain a summary or overall score, as the component behaviors are discrete and not theorized to contribute to a larger construct per se.
Table 1. OST Program Sites, Observation Frequency and Percentage, and Town Characteristics.

Site | Total Sample (N = 147): n (%) | Paired Sample (n = 40): n (%) | Town Type | Number of Schools | % of Students Eligible for Free/Reduced Lunch
A | 14 (9.52) | 3 (7.50) | City—Midsize | 36 | 98.8
B | 27 (18.37) | 3 (7.50) | Town | 9 | 75.5
C | 56 (38.10) | 11 (27.50) | City—Small | 15 | 70.1
D | 14 (9.52) | 8 (20.00) | City—Small | 18 | 45.0
E | 9 (6.12) | 3 (7.50) | City—Midsize | 50 | 90.7
F | 9 (6.12) | 3 (7.50) | City—Midsize | 48 | 77.8
G | 18 (12.24) | 9 (22.50) | Suburb—Large | 12 | 66.6

Note. OST = out-of-school time. Town type and number of schools retrieved from https://nces.ed.gov/ccd/districtsearch/index.asp; free/reduced lunch data retrieved from http://datacenter.kidscount.org/
We define Move, Scan, and Interact (MSI) as an OST professional moving throughout the space, scanning student behavior, or interacting with students, and evaluate it using momentary time sampling at 15-s intervals within a 10-min observation period. Praise, Correction, and Reference to Behavior Expectations (called Behavior Expectations from this point forward) are evaluated using a frequency count for a 10-min interval. We define Praise as the OST professional providing praise or otherwise acknowledging the student for desired behaviors and Correction as reprimanding or redirecting student(s) when undesired behavior is exhibited. Behavior Expectations are defined as an OST professional referencing program behavior expectations when engaging with student(s). After two 10-min observation periods occur, the observer makes two types of summative ratings. First, Praise, Correction, Behavior Expectations, and Nuisance Behaviors are evaluated through checklist ratings of adherence to specific behavioral components. That is, the rater checks (or does not check) items on a brief list of narrative descriptors. These ratings are intended to provide additional illustrative detail to complement and provide context for the quantitative data (frequency counts) provided when using the MASI to provide feedback. The definitions for Praise, Correction, and Behavior Expectations remain the same, while Nuisance Behavior is defined as undesired behaviors exhibited by students that indicate a mild disruption, have limited impact, and are not dangerous or escalating. Finally, the rater may record any relevant narrative notes about the observation. After all three OST professionals have been observed and all data have been recorded, the rater summarizes the overall findings of the observation by behavior and across the professionals. The final version of the MASI is included in the appendix.
BOQ-OST. The BOQ (Kincaid, Childs, & George, 2010) is a measure intended to assist PBIS teams to identify areas of strength and areas of implementation in need of improvement. With permission, we adapted the BOQ, aligning it with the OST context. The original school-based coaches version of the BOQ includes 10 sections with 53 specific items to be rated on a rubric with scales from 0 to 1 and 0 to 4 based on operationally defined criteria (Kincaid et al., 2010). Internal consistency, reliability, and validity analyses indicated that the BOQ can produce data with adequate psychometric properties (see Cohen, Kincaid, & Childs, 2007). The systematic adaptation of the BOQ-OST included the development of an additional section, “Set the Stage”; three items that reflect the teaching of expectations in the setting (e.g., posting of expectations, explicit instruction, reinforcing routines); and the adjustment of item wording across the measure to reflect the OST context. For instance, references to “students” were changed to “participants,” and “classroom systems” was changed to “setting specific systems” to correspond to the OST context.
Table 2. Measure of Active Supervision Behavior Definitions, Examples, Nonexamples, and Assessment Method.

Move, Scan, and Interact
Definition: OSTP actively moving throughout the space, scanning student behavior, or interacting with student(s).
Examples: OSTP walking through classroom chatting with students; OSTP actively looking throughout room monitoring student behavior.
Nonexamples: OSTP talking with other staff; OSTP reading a book.
Assessment method: Momentary time sampling at 15-s intervals.

Praise
Definition: OSTP praises or acknowledges student(s) for desired behaviors.
Examples: OSTP stating “Nice job on your homework”; OSTP stating “I like how you helped Johnny with that art project.”
Nonexamples: OSTP stating neutral or negative statements.
Assessment method: Frequency during 10-min observation period; check behavior characteristics.

Correction
Definition: OSTP reprimands, corrects student(s) when undesired behavior is exhibited.
Examples: OSTP stating “Next time, don’t run into the classroom”; OSTP stating “Stop yelling.”
Nonexamples: OSTP stating neutral or positive statements.
Assessment method: Frequency during 10-min observation period; check behavior characteristics.

Behavior Expectations
Definition: OSTP refers to program behavior expectations when engaging with student(s).
Examples: OSTP stating “You brought all your books—that’s Be Prepared”; OSTP stating “Next time, please Be Respectful and be quiet when entering the library.”
Nonexamples: OSTP stating “Keep it up”; OSTP stating “That’s not acceptable.”
Assessment method: Frequency during 10-min observation period; check behavior characteristics.

Nuisance Behaviors
Definition: Undesired behaviors, mild disruption, not dangerous, not escalating, limited impact.
Examples: Student repeatedly tapping his pencil; student getting out of her seat repeatedly.
Nonexamples: Students in a physical altercation; student engaging appropriately.
Assessment method: Check behavior characteristics.

Note. OSTP = out-of-school time professional.
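To illustrate how the MASI’s quantitative data for a single observed OST professional might be summarized, here is a minimal hypothetical sketch (variable names and all numbers are invented; the actual recording form is the MASI itself, reproduced in the article’s appendix):

```python
# Hypothetical summary of MASI data for one OST professional: momentary time
# sampling of MSI at 15-s intervals over 10 min, plus 10-min frequency counts
# of Praise, Correction, and Behavior Expectations. All values are invented.

msi_codes = [True] * 31 + [False] * 9   # 40 momentary codes (10 min / 15 s)
praise = 6                              # frequency counts over 10 min
correction = 2
behavior_expectations = 1

period_min = len(msi_codes) * 15 / 60   # 40 intervals x 15 s = 10 min

print(f"MSI observed in {100 * sum(msi_codes) / len(msi_codes):.1f}% of intervals")
print(f"Praise: {praise / period_min:.1f} per min; "
      f"Correction: {correction / period_min:.1f} per min; "
      f"Behavior Expectations: {behavior_expectations / period_min:.1f} per min")
print(f"Praise-to-correction ratio: {praise}:{correction}")
```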
SET-OST. Similarly, the School-Wide Evaluation Tool (SET; Sugai, Lewis-Palmer, Todd, & Horner, 2001) was adapted with permission for this study to be aligned with the OST context. The original SET includes seven sections that include 30 items. To complete the SET, the evaluator reviews permanent products (e.g., handbooks, behavioral data) and completes systematic observations and interviews guided by the SET procedures. Based on the information gathered, items are rated and a total score is obtained. Internal consistency, reliability, and validity analyses have indicated that the SET can produce data with adequate psychometric properties (see Horner et al., 2004). The systematic adaptation of the School-Wide Evaluation Tool in Out-of-School Time (SET-OST) for this study included slight modifications to the data collection procedures and the adjustment of the item wording to reflect the OST context. For example, the suggested permanent products for review were expanded to include additional materials appropriate to OSTs (e.g., program handbook). Wording revisions included changing references from “students” to “participants,” and “referrals to the office” was changed to “referrals to the OST coordinator.”
Procedures
We developed the MASI by generating items consistent
with the Tier 1 behavior management literature, reflecting
the core components of PBIS, and aligned with the existing
evidence on measurement. Initial content validity appraisal
further informed and assisted the refinement of the MASI
format and items, including operational definitions of the
four key behaviors. Once the MASI was finalized, raters
were trained to use the measure and then completed obser-
vations in OST programs. The content validation, measure
training, and observations are described below.
Content validation. To provide evidence of content validity,
five researchers and one practitioner familiar with OST pro-
grams and PBIS reviewed the initial version of the MASI. To
do so, reviewers responded to six questions on a 7-point Lik-
ert-type scale ranging from strongly disagree (1) to strongly
agree (7) using SurveyMonkey, an online survey website.
Questions addressed (a) the clarity of items, (b) the clarity of
directions, (c) the feasibility of the directions and procedures,
(d) the alignment of the measure with PBIS, (e) the appropri-
ateness of the measure for the elements of implementation
assessed, and (f) the appropriateness of the measure for OST
programs (McCoach, Gable, & Madura, 2013). Raters also
provided specific comments in an open response format as
well as general feedback. These ratings are described in
the “Results” section. Based on the responses and feedback of
the reviewers, several changes were made to the MASI. First,
the title of the measure was revised to be specific to active
supervision and interaction, rather than PBIS as a whole. This
change was prompted by a rater who indicated that the origi-
nal name was too broad, as the measure only addressed some
aspects of PBIS. Second, directions were revised to clarify
aspects of the measure that the reviewers’ indicated were con-
fusing. This change was prompted by a relatively lower rating
regarding the directions. Finally, minor copy edits noted by
the reviewers were addressed.
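To make the aggregation of the content-validation ratings concrete, here is a minimal sketch in Python. The six validators' scores below are invented for illustration; only the 7-point scale and the six review questions come from the study.

```python
import numpy as np

# Invented 7-point Likert ratings from the six content validators
# (rows = validators, columns = review questions a-f from the text).
ratings = np.array([
    [7, 5, 6, 6, 7, 7],
    [6, 4, 6, 7, 6, 6],
    [7, 6, 7, 4, 7, 7],
    [5, 5, 5, 6, 5, 6],
    [6, 4, 6, 7, 7, 6],
    [7, 6, 6, 6, 6, 7],
])
questions = ["items clear", "directions clear", "directions feasible",
             "aligned with PBIS", "implementation elements", "appropriate for OST"]

# Mean and sample SD per question, as reported in the Results section.
for name, col in zip(questions, ratings.T):
    print(f"{name}: M = {col.mean():.2f}, SD = {col.std(ddof=1):.2f}")
```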
Rater training. All three observers who participated in this
study underwent training to criterion in the MASI prior to
engaging in data collection. Training in the MASI included
a didactic introduction to SDO generally and the MASI in
particular. Orientation to SDO included instruction on types
of behavior and varied decision rules for direct observation.
Then, raters were introduced to the MASI (a) sections and
format, (b) behavior definitions including examples and
nonexamples, (c) procedures (i.e., randomly picking staff
and then completing the measure), and (d) ratings (i.e.,
momentary time sampling, frequency count, checklist rat-
ings). Raters then completed multiple ratings of a video clip
of students and an OST professional from an actual OST
program. Percentage agreement was calculated for momen-
tary time sampling and checklist ratings by behavior, while
total agreement was calculated by behavior for frequency
count data. Once 90% agreement was achieved for all rating
types and behaviors, raters were deemed ready to utilize the
measure in programs.
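As a small illustration of the 90% readiness criterion described above, the sketch below checks a set of invented training-agreement values (the names and numbers are hypothetical; only the 90% gate comes from the text):

```python
# Hypothetical agreement values from ratings of the practice video,
# one per rating type/behavior; all must reach 90% before fieldwork.
training_agreement = {
    "MSI (momentary time sampling)": 92.5,
    "Praise (frequency)": 100.0,
    "Correction (frequency)": 90.0,
    "Behavior Expectations (frequency)": 100.0,
    "Checklist ratings": 95.0,
}
ready = all(v >= 90.0 for v in training_agreement.values())
print("rater ready for field observations:", ready)
```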
Observations. MASI observations occurred within the con-
text of a larger study designed to support OST program
implementation of Positive BOOST (Farrell & Collier-
Meek, 2014), an approach to staff development that includes
PBIS curricula and materials adapted for the OST context.
Within the present study, different intensities of implemen-
tation support were delivered to OST leadership as part of
an effort to evaluate the level of training needed for success-
ful adoption. Most observations occurred following initial
implementation of Positive BOOST (65.98% in total sam-
ple; 72.50% in paired sample with two raters). Throughout
study phases, raters conducted observations at consistent
days and times, though at varied frequencies across pro-
grams (see Table 1). In the total sample, most observations
occurred in a classroom (42.85%) or cafeteria (23.12%)
with an average of 15.37 participants (SD = 13.79) present.
Participants engaged in activities such as homework
(32.65%), athletics/games (27.21%), or academic content
(12.24%). In the paired sample, most observations occurred
in a classroom (37.50%) or cafeteria (35.00%) with an aver-
age of 13.42 students (SD = 10.23) engaged in activities
such as homework (15.00%), athletics/games (45.00%), or
academic content (15.00%).
As described earlier, before each observation, raters ran-
domly selected three OST professionals to observe using a
random number generator. The raters then completed the
form background information, reviewed the behavior defi-
nitions as needed, and prepared their stopwatches. For each
of the three OST professionals, the raters independently
recorded the (a) prevalence of MSI behavior using momen-
tary time sampling for 10 min; (b) frequency of Praise,
Correction, and Behavior Expectations behavior for 10
min; (c) checklist ratings of Praise, Correction, Behavior
Expectations, and Nuisance Behaviors; and (d) any relevant
narrative notes. Following the completion of the MASI, the
raters summarized their observations. The SET-OST and
BOQ-OST were completed at three time points for each
program by the same raters.
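As a rough illustration of how a single MASI administration reduces to summary scores, the following sketch converts momentary time sampling codes into a prevalence percentage and frequency counts into per-minute rates. The function and data are hypothetical; the 15-s intervals, 10-min duration, and random selection of three professionals come from the procedure described above.

```python
import random

def summarize_masi_observation(msi_intervals, praise, correction,
                               expectations, duration_min=10):
    """Reduce one (hypothetical) MASI administration to summary scores.

    msi_intervals: 0/1 momentary time sampling codes, one per 15-s
    interval; a 10-min observation yields 40 intervals.
    """
    return {
        "MSI % of intervals": 100.0 * sum(msi_intervals) / len(msi_intervals),
        "Praise per min": praise / duration_min,
        "Correction per min": correction / duration_min,
        "Behavior Expectations per min": expectations / duration_min,
    }

# Randomly select three OST professionals from the program roster,
# mirroring the random-number-generator step in the procedure.
roster = [f"OSTP-{i:02d}" for i in range(1, 9)]
selected = random.sample(roster, k=3)

# Example: 36 of 40 intervals scored MSI; 4 praises, 3 corrections, 0 BE.
print(selected)
print(summarize_masi_observation([1] * 36 + [0] * 4, 4, 3, 0))
```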
Analyses
To evaluate the utility of the MASI, reliability and validity
analyses were conducted.
Reliability. As described by Hintze (2005), the reliability of
data derived from direct observation instruments can be
characterized and assessed in multiple ways. Two of the
most relevant for research situations are (a) interobserver
agreement and (b) intraobserver reliability. We conducted
both on the paired sample (n = 40 observations with two
raters).
Interobserver agreement. We calculated three interob-
server agreement indices. Percent agreement (i.e., fre-
quency of intervals with agreement divided by the total
number of intervals) was calculated to evaluate the agree-
ment across raters and observation sessions for frequency
count and momentary time sampling data. For frequency
count data, exact agreement for the entire 10-min interval
was required for the ratings to be considered in agreement;
the total number of observations with agreement was then
divided by the total number of observations. Coef-
ficient Kappa (Cohen, 1960) was calculated to evaluate the
agreement across raters for the momentary time sampling
data. Two-way intraclass correlations for a single rater, also
referred to as ICC(C,1) (Shrout & Fleiss, 1979), were cal-
culated for frequency count data.
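The three agreement indices can be computed directly. The sketch below (with illustrative data) implements percent agreement, Cohen's (1960) kappa for binary interval codes, and the two-way consistency ICC for a single rater, ICC(C,1), following Shrout and Fleiss (1979):

```python
import numpy as np

def percent_agreement(a, b):
    """Percentage of intervals on which two raters' codes agree."""
    a, b = np.asarray(a), np.asarray(b)
    return 100.0 * np.mean(a == b)

def cohens_kappa(a, b):
    """Cohen's (1960) kappa for two raters' binary (0/1) interval codes."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    po = np.mean(a == b)                    # observed agreement
    p1, p2 = a.mean(), b.mean()             # each rater's base rate of 1s
    pe = p1 * p2 + (1 - p1) * (1 - p2)      # agreement expected by chance
    return (po - pe) / (1 - pe)

def icc_c1(x):
    """Two-way consistency ICC for a single rater, ICC(C,1).

    x: (n targets) x (k raters) array of scores (e.g., frequency counts).
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_total = np.sum((x - grand) ** 2)
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between targets
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between raters
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Illustrative data: 8 momentary time sampling intervals from two raters,
# and Praise frequency counts for 5 professionals rated by both raters.
r1 = [1, 1, 0, 1, 1, 0, 1, 1]
r2 = [1, 1, 0, 1, 0, 0, 1, 1]
print(percent_agreement(r1, r2), round(cohens_kappa(r1, r2), 3))
print(round(icc_c1(np.array([[4, 4], [2, 2], [7, 6], [0, 0], [3, 4]])), 3))
```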
Intraobserver reliability. Although percent agreement,
Kappa, and ICC values provide distinct estimates for the
degree of consistency between raters, these calculations do
not account for the unique structure of the data collected
for this study; to wit, two to three randomly selected pro-
fessionals were observed per observation period. Individual
professional characteristics, circumstances during the over-
arching observation session, and the interaction between
these variables may each influence the resulting ratings.
For example, an OST program is observed on Monday,
Tuesday, and Wednesday by two simultaneous raters, and
three distinct, randomly selected professionals are observed
on each day. The Monday observation session may have
been a particularly difficult day due to a large number of
substitute professionals being present within the program.
In addition, the rater may be feeling particularly sympa-
thetic and provide ratings that are distinct from those of the
other rater. Finally, there may be interactions between these
potential sources of variance; raters may rate specific types
of professionals in different ways, creating a rater by pro-
fessional interaction that influences resulting ratings. Thus,
the day of the observation (session, or s), the behavior of
each OST professional being rated (professional, or p), the
rater (r), and the interaction between these variables may be
expected to influence the rating that was given to each OST
professional.
To address the complex nature of these data alongside
the multiple potential sources of variance in ratings and
their interactions, we used variance partitioning analyses to
determine the extent to which variance in ratings was influ-
enced by desirable (e.g., actual variations in the behavior of
the object of measurement) versus undesirable (e.g., rater
variations) sources (Briesch, Swaminathan, Welsh, &
Chafouleas, 2014). Specifically, a nested two-facet model,
(p:s) × r, was utilized to determine the amount of variance
in a given dependent variable attributable to (a) OST pro-
fessionals, who were nested within observation session, and
(b) rater, treated as randomly selected, and their interac-
tions. It is critical to note that the nested structure of the data
precludes an ability to disentangle the effect of professional
from the interaction between professional and session. The
measurement model was applied to ratings of MSI, Praise,
and correction behaviors as scored using the MASI-OST.
Given the extremely limited variance observed in the
Behavior Expectations ratings, these data were not sub-
jected to variance partitioning analyses. We transformed all
ratings into prevalence/rate for analyses; MSI was expressed
as the percentage of intervals scored as an occurrence of the
target behavior, while the frequency counts of Praise and
Correction behaviors were each divided by the observation
duration (i.e., 10 min).
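One off-the-shelf way to approximate this kind of variance partitioning is to fit a crossed random-effects model and read off the REML variance components. The sketch below uses statsmodels' vc_formula with a single all-encompassing group (a common workaround for crossed random effects) on simulated data; it is an illustration under those assumptions, not the authors' exact generalizability-theory procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a balanced (p:s) x r design: professionals nested in sessions,
# crossed with raters. Rater variance is set to zero, echoing the paper.
rng = np.random.default_rng(0)
rows = []
for s in range(14):                      # observation sessions
    s_eff = rng.normal(0, 0.15)
    for p in range(3):                   # professionals nested in session
        p_eff = rng.normal(0, 0.35)
        for r in range(2):               # raters (no simulated rater effect)
            rows.append((f"s{s}", f"p{p}", f"r{r}",
                         3.0 + s_eff + p_eff + rng.normal(0, 0.05)))
data = pd.DataFrame(rows, columns=["session", "prof", "rater", "rating"])
data["all"] = 1  # a single group, so every random effect can be crossed

vc = {
    "session": "0 + C(session)",
    "prof_in_session": "0 + C(session):C(prof)",   # p nested in s
    "rater": "0 + C(rater)",
    "session_x_rater": "0 + C(session):C(rater)",
}
fit = smf.mixedlm("rating ~ 1", data, groups="all",
                  re_formula="0", vc_formula=vc).fit(reml=True)

# Variance components are returned in the order of the vc_formula keys.
comps = dict(zip(vc, fit.vcomp))
comps["residual"] = fit.scale
total = sum(comps.values())
for name, est in comps.items():
    print(f"{name}: {est:.4f} ({100 * est / total:.1f}%)")
```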
Validity. To evaluate the extent to which the MASI data
were reflective of the concepts purportedly being assessed
(Shadish, Cook, & Campbell, 2002), three types of validity
were assessed. Content validity was evaluated through
analysis of the content validation data. Convergent validity
and discriminant validity were evaluated through compari-
sons of MASI, SET-OST, and BOQ-OST data. The SET-
OST and BOQ-OST were selected for this comparison
because program-wide implementation is
reliant on the treatment integrity of individual staff. There-
fore, we expected a relationship between these data sources.
These types of validity and the associated analyses are
described further below.
Content validity. Content validity involves whether the
items are representative of the broader concept that the
measure purports to assess (Hintze, 2005). The items from the con-
tent validation were used to provide an initial assessment
of content validity. Specifically, content validators rated
whether the measure was (a) well aligned with PBIS, (b)
appropriate for elements of implementation assessed, and
(c) appropriate for OST programs on a 7-point Likert-type
scale from very much disagree to very much agree. Further-
more, content validators rated (a) whether the items were
clear or understandable, (b) whether directions were clear
and understandable, and (c) whether directions were fea-
sible and appropriate for the measure on a 7-point Likert-
type scale from very much disagree to very much agree.
Convergent validity. Convergent validity is an aspect of
construct validity that examines whether items correlate
as expected with particular variables (Hintze, 2005). Con-
vergent and discriminant validity (described next) were
evaluated using the Spearman correlation coefficient (i.e.,
Spearman’s rho) due to the ordinal nature of SET and BOQ
ratings, as well as the nonnormal distributions of mean
MASI rating; correlation coefficients were applied to mean
ratings from the MASI as paired with individual ratings
from the SET-OST and BOQ-OST. These analyses were
conducted using MASI ratings derived from the total sample
(N = 147 observations). Means were calculated to provide
a single summative rating against which to compare data
from each administration of the SET-OST and BOQ-OST.
Mean ratings were calculated for each of the four core MASI
behaviors for the relevant date ranges aligning to each SET-
OST and BOQ-OST administration, resulting in 24 rows
of observations aligned with the three SET-OST and BOQ-
OST administrations across eight programs (3 × 8 = 24).
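For readers who want to reproduce this style of analysis, here is a minimal sketch with invented scores; only the 24-row pairing, Spearman's rho, and the Holm familywise correction applied in the Results come from the article.

```python
import numpy as np
from scipy.stats import spearmanr

def holm_adjust(pvalues):
    """Holm step-down adjustment of p values for familywise error."""
    p = np.asarray(pvalues, dtype=float)
    order = np.argsort(p)
    adjusted, running_max, m = np.empty_like(p), 0.0, len(p)
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

# Invented data: mean MASI Praise ratings for the 24 aligned rows
# (3 administrations x 8 programs), paired with three hypothetical
# SET-OST/BOQ-OST section scores.
rng = np.random.default_rng(1)
praise = rng.uniform(0.0, 1.0, 24)
sections = {name: rng.integers(0, 5, 24) for name in
            ["rewarding expectations", "responding to violations",
             "set the stage"]}

results = {}
for name, scores in sections.items():
    rho, p = spearmanr(praise, scores)
    results[name] = (rho, p)
pvals = holm_adjust([p for _, p in results.values()])
for (name, (rho, _)), p_adj in zip(results.items(), pvals):
    print(f"{name}: rho = {rho:.3f}, Holm-adjusted p = {p_adj:.3f}")
```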
We hypothesized that MSI would be modestly correlated
with SET-OST sections (a) system for rewarding expecta-
tions and (b) system for responding to violations, as well as
BOQ-OST sections (a) effective procedures for dealing
with discipline, (b) expectations and rules developed, (c) set
the stage, (d) reward/recognition program established, and
(e) setting-specific overall. We hypothesized that Praise
would be modestly correlated with SET-OST sections (a)
expectations defined, (b) expectations taught, and (c) sys-
tem for rewarding expectations, as well as BOQ-OST sec-
tions (a) effective procedures for dealing with discipline,
(b) expectations and rules developed, (c) set the stage, (d)
reward/recognition program established, (e) lesson plans
for teaching expectations/rules and routines, and (f) setting-
specific overall. We hypothesized that Correction would be
negatively correlated with the SET-OST section, a system
for responding to violations, as well as BOQ-OST section,
effective procedures for dealing with discipline. We hypoth-
esized that Behavior Expectations would be modestly cor-
related with SET-OST sections (a) expectations defined, (b)
expectations taught, and (c) system for rewarding expecta-
tions, as well as BOQ-OST sections (a) expectations and
rules developed, (b) reward/recognition program estab-
lished, (c) lesson plans for teaching expectations/rules and
routines, and (d) setting-specific systems. For all these
comparisons, modest correlations were expected to capture
the relationship between individual staff treatment integrity
and program-wide implementation.
Discriminant validity. Discriminant validity is an aspect of
construct validity that involves whether an item is not cor-
related with variables that it should not be correlated with
(Hintze, 2005). Overall, we hypothesized that MSI, Praise,
Correction, and Behavior Expectations would not be corre-
lated with SET-OST sections related to (a) monitoring and
decision-making, (b) management, and (c) broad support, as
well as BOQ-OST sections (a) PBIS leadership, (b) staff com-
mitment, (c) data entry and analysis plan, and (d) evaluation.
Results
Following overall descriptive statistics, the reliability and
validity of the data collected using the MASI, as evaluated
based on content validation, variance partitioning analyses,
and comparisons with other measures are described below.
Descriptive Statistics
Descriptive statistics across both samples are presented in
Table 3. For the total sample (N = 147), the mean percent-
age of intervals of MSI was 89.64% (SD = 14.49). In the
total sample, OST professionals, on average, used Praise (M
= 3.28) and Corrections (M = 3.46) at about the same level,
but infrequently referred to Behavior Expectations (M =
0.28). For the paired sample (n = 40), the mean percentage
of intervals of MSI was 90.70% (SD = 14.61). In the paired
sample, OST professionals, on average, praised (M = 3.87)
slightly more often than they provided corrections (M = 3.25),
but relatively infrequently referred to Behavior Expectations
(M = 0.42).
Reliability
Interobserver agreement. In evaluating the paired sample for
interobserver agreement, overall percentage agreement find-
ings indicate that the MASI was independently completed
by two raters with high rates of agreement (see Table 3).
MSI was completed with 82.5% agreement, frequency
behaviors ranged from 100.0% (Behavior Expectations) to
82.5% (Correction) agreement, and behavior characteristics
were rated above 90% agreement, with seven of the behav-
ior characteristics at 100.0% agreement. A Kappa coeffi-
cient of .755 was observed for MSI, indicating moderate
levels of agreement beyond those expected from chance.
The two-way, single-person ICC values for consistency
between raters, calculated for ratings of 40 professionals
over 14 observation sessions, suggested that the frequency
ratings for Praise (ICC = .994, 95% CI = [.989, .997]), Cor-
rection (ICC = .983, 95% CI = [.969, .991]), and Behavior
Expectations (ICC = 1.000) were conducted with a high
degree of consistency.
Intraobserver reliability. In evaluating the paired sample for
intraobserver reliability, results of variance partitioning
analyses suggested that for the MSI, Praise, and Correction
variables, sources of rating variance were generally attrib-
uted to the behavior of the OST professional being observed
(which, due to its nesting within session, cannot be disen-
tangled from the effect of session on the professional, see
Table 4). Variance in ratings of MSI was completely attrib-
utable to the professional: session facet (100%), while 91%
of variance in Praise ratings was attributable to the profes-
sional: session facet. Rating variance in Correction behav-
ior was chiefly attributable to two sources: professional:
session (54.2%) and session (44.6%), with a small amount
of variance attributable to the residual term (1.2%). No vari-
ance in ratings for any of the three analyzed behaviors was
attributable to the rater facet, which is consistent with the
high agreement indices observed.
Validity
Content validity. The content validation items provided an
assessment of content validity. Content validators indicated
that they generally agreed items were clear and understand-
able (M = 6.33, SD = 0.81), slightly agreed that directions
were clear and understandable (M = 5.00, SD = 1.26), and
agreed directions were feasible and appropriate for the
measure (M = 6.00, SD = 0.63). Furthermore, content vali-
dators indicated that they agreed the measure was
well aligned with PBIS (M = 6.00, SD = 1.54), agreed with
the appropriateness of the elements of implementation
assessed (M = 6.33, SD = 1.02), and agreed to very much
agreed that the measure was appropriate for an OST pro-
gram (M = 6.5, SD = 0.83).
Convergent validity. Spearman’s rank-order correlation coef-
ficients between the MASI and BOQ-OST and SET-OST
are presented in Table 5. After correcting for familywise
error with the Holm method, no correlations were statisti-
cally significant. The lowest corrected p value observed
was .160 (“Praise” with “BOQ: Reward/recognition program established,” rho = .595).
Table 3. Descriptive Statistics and Reliability Across Measure of Active Supervision and Interaction Variables.

                                      Total sample (N = 147)   Paired sample (n = 40)   Reliability
Behavior                              M        SD              M        SD              % Agreement   Kappa   ICC (Consistency)
Move, Scan, and Interact              89.64%   14.49           90.70%   14.61           82.5          .755    —
Frequency during 10-min observation
  Praise                              3.28     4.15            3.87     3.91            90.0          —       .994
  Correction                          3.46     3.98            3.25     2.76            82.5          —       .983
  Behavior Expectations               0.28     0.95            0.42     1.38            100.0         —       1.000

Note. ICC = intraclass correlation.
Table 4. Variance Component Estimates and Percentages of Variance for (p:s) × r Model.

                        Move, Scan, and Interact   Praise               Corrections
Variance component      Estimate   %               Estimate   %         Estimate   %
Professional: session   .019       100.0           .142       91.0      .045       54.2
Session                 .000       0.0             .013       8.3       .037       44.6
Rater                   .000       0.0             .000       0.0       .000       0.0
Session × Rater         .000       0.0             .000       0.0       .000       0.0
Residual                .000       0.0             .001       0.6       .001       1.2
We hypothesized that MSI would be modestly correlated with SET-OST sections (a)
system for rewarding expectations and (b) system for
responding to violations, as well as BOQ-OST sections (a)
effective procedures for dealing with discipline, (b) expec-
tations and rules developed, (c) set the stage, (d) reward/
recognition program established, and (e) setting-specific
overall. Correlation analyses indicated that MSI was not
significantly correlated with SET-OST sections (a) expecta-
tions taught (.540), (b) system for rewarding expectations
(.457), as well as the SET overall score (.441). MSI ratings
were also not significantly correlated with BOQ-OST sec-
tions (a) set the stage (.441) and (b) reward/recognition pro-
gram established (.494).
We hypothesized that Praise would be modestly corre-
lated with SET-OST sections (a) expectations defined, (b)
expectations taught, and (c) system for rewarding expecta-
tions, as well as BOQ-OST sections (a) effective procedures
for dealing with discipline, (b) expectations and rules devel-
oped, (c) set the stage, (d) reward/recognition program
established, (e) lesson plans for teaching expectations/rules
and routines, and (f) setting-specific overall. Correlation
analyses indicated that Praise was not significantly
correlated with SET-OST sections (a) expectations taught
(.477) and (b) system for rewarding expectations (.566).
Praise ratings were also not significantly correlated with
BOQ-OST sections set the stage (.441), lesson plans for
teaching expectations/rules and routines (.468), setting-spe-
cific systems (.530), and evaluation (.443) as well as the
BOQ overall (.485).
We hypothesized that Correction would be negatively
correlated with the SET-OST section, a system for respond-
ing to violations, as well as BOQ-OST section, effective
procedures for dealing with discipline. Correlation analyses
indicated that Correction was not significantly correlated with
SET-OST sections responding to violations (–.416) and
with BOQ-OST set the stage (.563).
We hypothesized that Behavior Expectations would be
modestly correlated with SET-OST sections (a) expecta-
tions defined, (b) expectations taught, and (c) system for
rewarding expectations, as well as BOQ-OST sections (a)
expectations and rules developed, (b) reward/recognition
program established, (c) lesson plans for teaching expecta-
tions/rules and routines, and (d) setting-specific systems.
Correlation analyses indicated that Behavior Expectations
Table 5. Spearman Correlations Between the Measure of Active Supervision and Interaction and the System-Wide Evaluation Tool–OST and BOQ–OST.

                                                         Move, Scan, Interact   Praise     Correction   Behavior Expectations
System-Wide Evaluation Tool for OST (a)
  Expectations defined                                   .303                   −.080      −.093        .293 (b)
  Expectations taught                                    .540                   .477 (b)   −.029        .246 (b)
  System for rewarding expectations                      .457 (b)               .566 (b)   .045         .223 (b)
  System for responding to violations                    .149                   −.004      −.416 (b)    .055
  Monitoring and decision-making (c)                     .082                   .054       −.163        .402
  Management (c)                                         .343                   .231       −.059        .000
  Broad support (c)                                      .283                   .129       −.141        .082
  SET Overall                                            .441                   .335       −.116        .314
BOQ for OST (a)
  PBS leadership (c)                                     .372                   .231       .050         .091
  Staff commitment (c)                                   .253                   .278       .086         .137
  Effective procedures for dealing with discipline       .090                   .191 (b)   −.118 (b)    .306
  Data entry and analysis plan established (c)           .007                   .361       .066         .494
  Expectations and rules developed                       .421 (b)               .264 (b)   −.016        .257 (b)
  Set the stage                                          .441                   .500 (b)   .563         .290
  Reward/recognition program established                 .494 (b)               .595 (b)   −.104        .308 (b)
  Lesson plans for teaching expectations/rules and routines  .399               .468 (b)   −.025        .375 (b)
  Implementation plan                                    .364                   .285       .110         .148
  Setting-specific systems                               .413 (b)               .530 (b)   .265         .409 (b)
  Evaluation (c)                                         .415                   .443       .058         .375
  BOQ Overall                                            .398                   .485       .146         .307

Note. OST = Out-of-School Time; BOQ = Benchmark of Quality; PBS = Positive Behavior Support.
(a) Measures were adapted for the OST program context with permission from the original authors. (b) Expected to be modestly correlated to provide evidence of convergent validity. (c) Expected to not be correlated to provide evidence of discriminant validity.
was not significantly correlated with SET-OST section
monitoring and decision-making (.402) and the BOQ-OST
section data entry and analysis (.494).
Discriminant validity. Correlations between the MASI and
BOQ-OST and SET-OST are presented in Table 5. Overall,
we hypothesized that MSI, Praise, Correction, and Behav-
ior Expectations would not be correlated to SET-OST sec-
tions related to (a) monitoring and decision-making, (b)
management, and (c) broad support, as well as BOQ-OST
sections (a) PBIS leadership, (b) staff commitment, (c) data
entry and analysis plan established, and (d) evaluation. Cor-
relation analyses indicate that SET-OST sections manage-
ment and broad support, as well as BOQ-OST section staff
commitment, did not demonstrate significant correlations
with MSI, Praise, Correction, and Behavior Expectations
(see Table 5). Behavior Expectations was not significantly
correlated with monitoring and decision-making and data
entry and analysis plan established.
Discussion
The use of research-based Tier 1 behavior management
practices, such as high rates of praise, use of behavior expec-
tations, and low levels of correction, is associated with posi-
tive outcomes for students (Bradshaw et al., 2009;
Newcomer et al., 2009). Unfortunately, education profes-
sionals such as OST professionals and teachers rarely
deliver these strategies consistently and require ongoing
implementation support (Reddy et al., 2013b; Ruberto,
2015). To do so, ongoing assessment of Tier 1 behavior
management implementation is needed and some emerging
tools (Gresham et al., 2017) and strongly supported, class-
room-focused measures are available (Pianta & Hamre,
2009). Initial research has utilized SDO methodology to
feasibly and flexibly evaluate implementer behavior (e.g.,
Simonsen et al., 2013). To this end, we developed the
MASI to measure OST professionals’ Praise and Correction
Statements, References to Behavior Expectations, and
Active Supervision, and conducted observations by multi-
ple raters in seven OST programs. Findings suggest that the
MASI can be completed by two raters with high agreement;
ratings are attributable to desirable sources of variance for
most behaviors; content validators positively rated the mea-
sure constructs and clarity, and results were not signifi-
cantly correlated with components of the SET-OST and
BOQ-OST.
Interobserver reliability analyses suggest that the MASI
data reported here were completed with high levels of
agreement. Intraobserver analyses, conducted using vari-
ance partitioning analyses, suggested that the majority of
variance in ratings for MSI and Praise was attributable to
the professional and/or the interaction between the profes-
sional and the session wherein they were observed. For
Corrections behaviors, a just under half of the variance in
ratings was attributable to the session independent of the
professional and the professional/session interaction. That
is, aspects of the session during which the observation took
place (e.g., Monday afternoon, math day, new room) were
almost as influential on rating variance as the professional
within the session. In other words, the ratings of Correction
may be influenced by the overall session just as much as the
person who is expressing the behavior and the person’s
interaction with the session.
The extremely limited variance in the Behavior
Expectations behavior precluded its use in variance parti-
tioning analyses, and suggests that this variable requires
additional attention in order for Behavior Expectation fre-
quency count data to be used in this measure. Further devel-
opment should focus on examination of the Behavior
Expectation behavior definition, and whether Behavior
Expectation is better characterized as a state behavior (e.g.,
better measured using time sampling) than as an event
behavior (e.g., using frequency counts). Furthermore, the
low frequency and limited variability in the Behavior
Expectation data may have affected the interobserver agree-
ment of the behavior characteristics ratings related to this
construct. Ratings of Expectations Posted and Expectations
Reinforced had only modest levels of agreement per Kappa.
It is possible that revisions to the Behavior Expectation
definition and measurement could have a commensurate
impact on adjusting the agreement of these ratings.
Evidence for the validity of the MASI was collected
through initial content validation and correlations between
the MASI and measures of PBIS implementation adapted
for OST settings. Content validators agreed that the items
were clear, appropriate for the setting, and aligned with spe-
cific behavior management practices and PBIS. After cor-
recting for multiple comparisons, correlation coefficients
between the MASI and the SET-OST and BOQ-OST did
not indicate a significant relationship between the rankings
of results from MSI, Praise, Correction, and Behavior
Expectations and specific factors of the PBIS implementa-
tion measures. In general, these correlations were in the
expected directions providing evidence of convergent and
divergent validity. However, some modest and unexpected
correlations (e.g., MSI with SET-OST Expectations Taught,
Praise with BOQ-OST Evaluation) may suggest a more gen-
eral relationship between the specific practices on the MASI
(e.g., Praise, Active Supervision) and aspects of PBIS
implementation than initially expected. However, given
that none of the correlations were determined to be signifi-
cant, these results are extremely tentative and potentially no
different from zero. Thus, evidence for the convergent and
discriminant validity of data derived from the MASI when
compared with results from the SET-OST and BOQ-OST is
still absent at this time. Future research may evaluate the
correspondence between MASI scores and other measures
of behavior management. It would be expected that the cor-
relations between the MASI and staff-level measures would
be higher than the correlations reported here between the
MASI and SET-OST and BOQ-OST data.
Limitations
This initial assessment of the MASI has limitations. The
three raters in this study were enrolled in a research-ori-
ented university. Although the rater training is documented
here and may be replicated by others, findings may not be
generalizable to different types of raters (e.g., OST lead-
ers) and OST contexts. Future research should document
the reliability of data collected via the MASI by other rat-
ers. In addition, the OST professionals evaluated here
were involved in a larger project evaluating training and
support of Positive BOOST implementation. No OST pro-
fessional demographic data were collected. Because this
investigation was conducted within a larger study, the par-
ticipant data may not be representative of wider OST pop-
ulations and, furthermore, the varied phases of the Positive
BOOST project may have influenced OST professional
behavior. Future studies may utilize the MASI to evaluate
OST staff behavior in programs unassociated with Positive
BOOST or other settings that students participate in, such
as school.
Although the limited number of observations utilized
in this study precluded more fine-grained analyses,
future research should consider the role of program
implementation upon the validity and reliability of data
derived from the MASI, as well as consider the use of a
design that would permit the examination of individual
professional-level variance disentangled from session.
Also, the quantitative data produced by the MASI were
the focus of this study, and the suitability and informa-
tion provided by the checklist ratings were not evaluated.
Future research should evaluate the extent to which the
checklist ratings and narrative recording are reliable,
valid, and informative.
Last, the MASI data were compared with ratings of the
SET and BOQ that were adapted for the OST setting.
Although prior analyses have indicated that the original
measures can produce data with adequate psychometrical
properties in school settings (Cohen et al., 2007; Horner
et al., 2004), the OST adaptions used in this study have not
been assessed in this way. Furthermore, the a priori hypoth-
eses for convergent and divergent validity analyses between
the MASI and the SET and BOQ were identified by the
authors alone. Future research could include an expert
panel not otherwise involved in the research to provide
their impressions of the expected relationships and evalu-
ate the relationship between the MASI and other measures
that include items with behavior management practices
(e.g., CLASS, CSS-OF, Reddy et al., 2013a; Pianta &
Hamre, 2009).
Implications for Research
Despite the importance of Tier 1 behavior management
strategies to prevent and address problem behavior (Kern &
Clemens, 2007; Simonsen et al., 2008), there is relatively
limited research on related implementation measures, out-
side of the CLASS (Pianta & Hamre, 2009) and CSS-OF
(Reddy et al., 2013a), comprehensive measures of class-
room instructional and behavioral practices. SDO is typi-
cally applied to evaluate student behavior (Suen & Ary,
1989), but may also have utility in the assessment of teacher
behavior (e.g., Colvin et al., 1997; Simonsen et al., 2013).
As applied here, some evidence suggested that the MASI
was an appropriate measure, particularly related to the
assessment of Praise and Correction Statements as well as
Active Supervision. Additional research is needed to refine
the Behavior Expectation definition and measurement,
which might provide insight about why this behavior was
rated with comparatively less agreement. Further research
may also evaluate the use of the MASI in settings outside of
OST programs, such as classrooms, because the constructs
that the measure assesses are likely relevant to settings out-
side of OST (Newcomer et al., 2009). Research could also
assess how sensitive to change MASI data are and, in doing
so, evaluate the utility of this measure for providing feed-
back to implementers about their behavior. Furthermore,
the findings on the MASI suggest that SDO may be applied
to other adult implementation behaviors, such as prompting
and providing choices, although additional research is
needed. Overall, SDO may be a promising methodology for
future treatment integrity assessment research.
Implications for Practice
These findings suggest some evidence to support the use of
the MASI to evaluate OST staff implementation of behavior
management practices, with the exception of references to
Behavior Expectations. Implementation of research-based
behavior management practices is critical, yet doing so con-
sistently is challenging (e.g., Reddy et al., 2013b); accord-
ingly, it is important to monitor regularly. To address this
need for monitoring, the measure might provide one option
for assessing key Tier 1 strategies and providing targeted
performance feedback or support. That is, the MASI could
be used on a regular basis to facilitate data-driven perfor-
mance feedback for staff and ensure consistent implementa-
tion of these Tier 1 strategies. Whereas it seems likely that
individual staff performance and program-level implemen-
tation are related, the exact character of those relations is
not clear.
Appendix

MASI–OST / Observation of OST Professional (OSTP)

OSTP #/Code: ______  OST program: ______  Setting: ______  Observer 1: ______
Number of students present: ______  Activity: ______  Observer 2 (NA): ______

Plan to randomly select three OSTPs to observe and record observations separately. First, select OSTPs using a random number generator. Complete the above background information. Review behaviors and definitions. Then, (a) complete momentary time sampling of MSI in 15-s intervals, and (b) take a frequency count of reinforcement, correction, and behavior expectations for 10 continuous minutes. Immediately following the administration, review behavior characteristics (in italics) and record if they were present during the 10 min. Write any clarifying narrative notes. Summarize observations on page 5.

SYSTEMATIC DIRECT OBSERVATIONS

Start Time: ______
Move, Scan, Interact (MSI): OSTP actively moving throughout the space, scanning student behavior, or interacting with student(s).
[Recording grid: MSI coded at each 15-s interval from 0:15 through 10:00 (40 intervals).]

FREQUENCY OBSERVATIONS

Start Time: ______
Reinforcement (Reinforce/Be positive): OSTP praises or acknowledges student(s) for desired behaviors. (Frequency, 10 min)
Correction: OSTP reprimands, corrects student(s) when undesired behavior is exhibited. (Frequency, 10 min)
Behavior Expectations (BE): OSTP references behavior expectations when engaging with student(s). (Frequency, 10 min)

Behavior characteristics (check if present during the 10 min):
Reinforcement: Specific: identifies skill/behavior student exhibited. Immediate: provided asap following desired behavior. Appropriate: to student, setting, behavior exhibited. Delivered across many students in program. Praise follows shift to desired behavior. Praise is delivered to students engaged in appropriate behaviors. Praise immediately follows shift from non-desired to desired behavior.
Correction: Specific: identifies skill/behavior student exhibited. Immediate: provided asap following undesired behavior. Redirection: accompanied by redirection. Brief duration: correction is less than 30 s. Refers to behavior expectations and/or routines.
Behavior Expectations: BE posted in area of activity (if indoors). BE adherence reinforced: students praised for adherence. Refers to behavior expectations and/or routines.
Nuisance Behaviors: Undesired behaviors, mild disruption, not dangerous, not escalating, limited impact (no frequency data collected for nuisance behavior). Ignored: no attention given to nondesired behaviors. Different responses to nuisance and problem behavior.

Narrative notes: ______

Note. MASI = Measure of Active Supervision and Interaction; OST = out-of-school time.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect
to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article: The
authors gratefully acknowledge the support of the Connecticut
State Department of Education (CSDE), in particular, Shelby
Pons, LMSW in the Office of Health/Nutrition, Family Services
and Adult Education; Betsy Leborious, Kaitlyn O’Leary, Kimberly
Brewer, and Gerald Barrett at the Capital Region Education
Council (CREC); and the UConn Center for Applied Research in
Human Development. The opinions expressed are those of the
authors and do not represent views of the CSDE or CREC.
References
Bradshaw, C. P., Koth, C. W., Thornton, L. A., & Leaf, P. J.
(2009). Altering school climate through school-wide posi-
tive behavioral interventions and supports: Findings from a
group-randomized effectiveness trial. Prevention Science, 10,
100–115.
Briesch, A. M., Swaminathan, H., Welsh, M., & Chafouleas, S.
M. (2014). Generalizability theory: A practical guide to study
design, implementation, and interpretation. Journal of School
Psychology, 52, 13–35.
Cohen, J. (1960). A coefficient of agreement for nominal scales.
Educational and Psychological Measurement, 20, 37–46.
Cohen, R., Kincaid, D., & Childs, K. E. (2007). Measuring school-
wide positive behavior support implementation: Development
and validation of the Benchmarks of Quality. Journal of
Positive Behavior Interventions, 9, 203–213.
Collier-Meek, M. A., Fallon, L. M., & Gould, K. (accepted). How
are treatment integrity data assessed? A systematic review
of performance feedback literature. School Psychology
Quarterly.
Collier-Meek, M. A., Fallon, L. M., Sanetti, L. M. H., & Maggin,
D. M. (2013). Focus on implementation: Strategies for prob-
lem-solving teams to assess and promote treatment fidelity.
Teaching Exceptional Children, 45, 52–59.
Collier-Meek, M. A., Sanetti, L. M. H., & Boyle, A. M. (2016).
Providing feasible implementation support: Direct train-
ing and implementation planning in consultation. School
Psychology Forum, 10, 106–119.
Colvin, G., Sugai, G., Good, R. H., III, & Lee, Y. Y. (1997).
Using active supervision and precorrection to improve transi-
tion behaviors in an elementary school. School Psychology
Quarterly, 12, 344–363.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied
behavior analysis (2nd ed.). Upper Saddle River, NJ: Prentice
Hall.
Epstein, M., Atkins, M., Cullinan, D., Kutash, K., & Weaver,
R. (2008). Reducing behavior problems in the elementary
school classroom: A practice guide (NCEE No. 2008-012).
Washington, DC: National Center for Education Evaluation
and Regional Assistance, Institute of Education Sciences,
U.S. Department of Education. Retrieved from https://ies.
ed.gov/ncee/wwc/PracticeGuide/4
Farrell, A. F., & Collier-Meek, M. A. (2014). Positive BOOST
(Positive Behavior in Out of School Time): A guide to develop-
ment and implementation of Positive Behavior Interventions
and Supports. Storrs: Center for Applied Research, University
of Connecticut.
Ferguson, T. D., Briesch, A. M., Volpe, R. J., & Daniels, B.
(2012). The influence of observation length on the depend-
ability of data. School Psychology Quarterly, 27, 187–197.
Gresham, F. M. (2014). Measuring and analyzing treatment integ-
rity data in research. In L. M. H. Sanetti & T. R. Kratochwill
(Eds.), Treatment integrity: Conceptual, methodological,
and applied considerations for practitioners (pp. 109–130).
Washington, DC: American Psychological Association.
Gresham, F. M., Dart, E. H., & Collins, T. A. (2017).
Generalizability of multiple measures of treatment integrity:
Comparisons among direct observation, permanent products,
and self-report. School Psychology Review, 46, 108–121.
Hintze, J. M. (2005). Psychometrics of direct observation. School
Psychology Review, 34, 507–519.
Horner, R. H., Todd, A. W., Lewis-Palmer, T., Irvin, L. K., Sugai,
G., & Boland, J. B. (2004). The school-wide evaluation tool:
A research instrument for assessing school-wide positive
behavior support. Journal of Positive Behavior Interventions,
6, 3–12. doi:10.1177/10983007040060010201
Johnson, A. H., Chafouleas, S. M., & Briesch, A. M. (2017).
Dependability of data derived from time sampling meth-
ods with multiple observation targets. School Psychology
Quarterly, 32, 22–34. doi:10.1037/spq0000159
Kern, L., & Clemens, N. H. (2007). Antecedent strategies to
promote appropriate classroom behavior. Psychology in the
Schools, 44, 65–75.
Kincaid, D., Childs, K., & George, H. (2010). School-wide
Benchmarks of Quality (Revised). Tampa: University of
South Florida.
Leedy, A., Bates, P., & Safran, S. P. (2004). Bridging the research-
to-practice gap: Improving hallway behavior using positive
behavior supports. Behavioral Disorders, 29, 130–139.
Lewis, T. J., Colvin, G., & Sugai, G. (2000). The effects of pre-
correction and active supervision on the recess behavior of
elementary students. Education and Treatment of Children,
23, 109–121.
McCoach, D. B., Gable, R. K., & Madura, J. (2013). Instrument
design in the affective domain (3rd ed.). New York, NY:
Springer.
Newcomer, L., Colvin, G., & Lewis, T. J. (2009). Behavior sup-
ports in nonclassroom settings. In W. Sailor, G. Dunlap, G.
Sugai & R. Horner (Eds.), Handbook of positive behavior
support (pp. 497–520). New York, NY: Springer.
Partin, T. C. M., Robertson, R. E., Maggin, D. M., Oliver, R. M.,
& Wehby, J. H. (2009). Using teacher praise and opportu-
nities to respond to promote appropriate student behavior.
Preventing School Failure, 54, 172–178.
Pianta, R. C., & Hamre, B. K. (2009). Conceptualization, measure-
ment, and improvement of classroom processes: Standardized
observation can leverage capacity. Educational Researcher,
38, 109–111.
Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom
Assessment Scoring System (CLASS) manual, Pre-K. Baltimore,
MD: Brookes.
Reddy, L. A., Fabiano, G. A., Dudek, C. M., & Hsu, L. (2013a).
Development and construct validity of the Classroom Strategies
Scale-Observer Form. School Psychology Quarterly, 28, 317–341.
Reddy, L. A., Fabiano, G. A., Dudek, C. M., & Hsu, L. (2013b).
Instructional and behavior management practices imple-
mented by elementary general education teachers. Journal of
School Psychology, 51, 683–700.
Reinke, W. M., Herman, K. C., & Sprick, R. (2011). Motivational
interviewing for effective classroom management: The class-
room check-up. New York, NY: Guilford Press.
Reinke, W. M., Lewis-Palmer, T., & Merrell, K. (2008). The
classroom check-up: A classwide consultation model for
increasing praise and decreasing disruptive behavior. School
Psychology Review, 37, 315–332.
Ruberto, L. (2015). Embedding elements of Positive Behavioral
Interventions and Supports (PBIS) in a summer program
(Doctoral dissertation, 998). Retrieved from http://digitalcommons.uconn.edu/dissertations/998
Sanetti, L. M. H., & Collier-Meek, M. A. (2014). Increasing the rigor
of treatment integrity assessment: A comparison of direct obser-
vation and permanent product methods. Journal of Behavioral
Education, 23, 60–88. doi:10.1007/s10864-013-9179-z
Sanetti, L. M. H., & Kratochwill, T. R. (2009). Towards develop-
ing a science of treatment integrity: Introduction to the special
series. School Psychology Review, 38, 445–459.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002).
Experimental and quasi-experimental designs for generalized
causal inference. Boston, MA: Houghton Mifflin.
Shapiro, E. S. (2011). Academic skills problems: Direct assess-
ment and intervention (4th ed.). New York, NY: Guilford
Press.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses
in assessing rater reliability. Psychological Bulletin, 86, 420–
428. doi:10.1037/0033-2909.86.2.420
Simonsen, B., Fairbank, S., Briesch, A., Myers, D., & Sugai,
G. (2008). Evidence-based practices in classroom manage-
ment: Considerations for research to practice. Education and
Treatment of Children, 31, 351–380.
Simonsen, B., Fairbank, S., Briesch, A., & Sugai, G. (2006).
Classroom management: Self-assessment revised. Center on
Positive Behavior Interventions and Supports, University of
Connecticut. Available from http://www.pbis.org
Simonsen, B., MacSuga, A. S., Fallon, L. M., & Sugai, G. (2013).
The effects of self-monitoring on teachers’ use of specific
praise. Journal of Positive Behavior Interventions, 15, 5–15.
doi:10.1177/1098300712440453
Suen, H. K., & Ary, D. (1989). Analyzing quantitative behavioral
observation data. Hillsdale, NJ: Lawrence Erlbaum.
Sugai, G., Lewis-Palmer, T., Todd, A., & Horner, R. H.
(2001). School-wide evaluation tool. Eugene: University
of Oregon.
Sutherland, K. S., Wehby, J. H., & Copeland, S. R. (2000). Effect
of varying rates of behavior-specific praise on the on-task
behavior of students with EBD. Journal of Emotional and
Behavioral Disorders, 8, 2–8.
Wickstrom, K. F., Jones, K. M., LaFleur, L. H., & Witt, J. C.
(1998). An analysis of treatment integrity in school-based
behavioral consultation. School Psychology Quarterly, 13,
141–154. doi:10.1037/h0088978
2005 – Using data to support learning: Conference Archive

An evidence-based approach to teaching and learning

Michele Bruniges
ACT Department of Education and Training

http://research.acer.edu.au/research_conference_2005/15
Michele Bruniges
Department of Education and Training,
Australian Capital Territory
Michele Bruniges (Dip T, Grad Dip Ed Studies,
MEd, PhD) has experience teaching in both
primary and secondary schools. She has also held
the positions of Senior Curriculum Adviser,
Assessment and Reporting, Chief Education
Officer, Mathematics and Assistant Director of
School Assessment and Reporting for the NSW
Department of Education and Training.
During 1999, Michele received an award for
excellent service to public education and training
in NSW. The following year, Michele was
appointed Director of Strategic Information and
Planning with responsibility for leading and
directing systems performance, information
systems and corporate and strategic planning. In
the same year, she was awarded a Churchill
Fellowship to study the analysis, monitoring and
reporting of student achievement in education
systems and research studies in the United States
and the Netherlands.
Michele was appointed Assistant Director-
General, School Education Services NSW in 2003
with a strong interest in educational measurement
issues, school culture and the process of managing
change. In early 2004, Michele was appointed
Regional Director, Western Sydney with priorities
including a renewed focus on supporting frontline
teachers and school staff and the provision of
quality responses to local issues. In January 2005,
Michele took up the position of Chief Executive
of the ACT Department of Education and
Training.
A Greek philosopher might suggest that
evidence is what is observed, rational
and logical; a Fundamentalist – what you
know is true; a Post Modernist – what
you experience; a Lawyer – material
which tends to prove or disprove the
existence of a fact and that is admissible
in court; a Clinical Scientist –
information obtained from observations
and/or experiments; and a teacher –
what they see and hear.
The past decade has seen a high level
of engagement and commitment by
Australian schools to the collection,
analysis and interpretation of
information about students to inform
teaching and learning. Rapid changes in
society, economics and technology, the
increased demand for accountability,
and the need to prepare all students to
be citizens in an increasingly globalised
world, have cultivated the increased
requirement to inform and improve
education through various evidence-
based approaches.
However, while evidence is one way to
support the core business of schools
– maximising student learning and
outcomes – evidence in and of itself is
not sufficient to maximise student
outcomes. If we are serious about
developing and maintaining an evidence-
based culture of improvement in
teaching and learning, the unique and
specialised knowledge, skills, experience
and professional capacity of teachers
must be valued as fundamental
components of any evidence process.
That is, the way in which evidence is
obtained, collated, interpreted and results
strategically utilised, must be interlinked
with, and influenced by, the profession.
What is evidence?
Evidence is obtained through various
forms of assessment – which may
include teacher observation, tests, peer
assessment and practical performance –
and constitutes the information and
data that is used to gauge the
educational attainment and progress of
individuals; groups; and cohorts; and
increasingly, the effectiveness of
programs and performance of
educational systems.
Information and assessment data are
increasingly used for multiple purposes,
including national and international
comparisons of standards of learning
and educational attainment (Timmins,
2004). Increased pressures at a local
level to meet accountability
requirements, and to deliver improved
results across the cohort have ‘put data
to an increasing array of use’ (Timmins,
2004, p. 2) in schools.
Why is an evidence-
based approach to
teaching and learning
important?
As realised by many educationalists, an
evidence-based approach to teaching
and learning is crucial to maximising
student outcomes. We need to ‘know’ –
to have evidence about the
performance of our students in order
to support them to achieve high quality
educational outcomes.
There are four major ways in which we
can use the information we gain from
assessment (our evidence) to maximise
student learning and outcomes. These
include using evidence to:
• improve the focus of our teaching
(a diagnostic capacity)
• focus students’ attention on their
strengths and weaknesses (a
motivation capacity)
• improve programming and planning
(a means of program assessment)
• report on an assessment (a means
of communicating student
achievement)
In order to most effectively support
students to achieve quality educational
outcomes, the process of evidence to
inform teaching and learning must be
an explicit and accountable one, which
is equitable, representative, valid, and
reliable.
Sharing the secret
The increased use of information and
assessment data to inform teaching and
learning brings a largely recognised
increased need for assessment that is
an open and accountable process about
what really matters, what students
should know, and a process that
provides the best information to them
on how they can improve.
Assessment should not be a covert
mission, but rather a process defined by
the importance of transparency and
information sharing which is directed by
positioning the needs of students as
paramount. Providing students with
minimal and nondescript information
about assessment is an antiquated
approach, which has the potential to
disengage students from an important
aspect of their learning experience and
limit their capacity for achievement. Being
open with students about the once held
secrets of assessment, and engaging
students in associated questioning and
conversation, provides a greater
opportunity for all students to achieve
high quality educational outcomes.
The development of assessment that
makes explicit the standards, criteria
and feedback for students has been
recognised as a significant development
in describing and quantifying student
achievement and progress. The
adoption of criterion-referenced
reporting (in favour of, or in
collaboration with, the more traditional
norm-referenced assessment) by
Australian education systems as the
primary means to describe students’
achievements and progress has enabled
the use of data to identify particular
strengths or weaknesses in curriculum
terms at the classroom, school and
system levels. One example of this has
been the development of assessment
rubrics. Rubrics, even in their simplest
form, have been powerful in supporting
student learning by providing a list of
criteria, or ‘what counts’, in a project or
assignment, and by providing a scale
describing the characteristics of a range
of student work. This tool creates the
structure for important conversations
about assessment by providing students
with informative feedback about their
work and detailed evaluations of final
products (Department of Education
Tasmania).
Criterion-referenced assessment sheds
light on many of the previously
protected secrets of assessment. In the
past, the details of assessment have
usually remained teacher-only
information. Increasingly, however, teachers and students are engaging in conversations about assessment that involve a common language. These
conversations are crucial to provide the
learner with an opportunity and
impetus to discuss how goals are set,
how performance is measured, and
how performance can be improved.
Significantly, they enable the learner to
experience an active role in the
assessment process. They also provide
important feedback for teachers that
can be used to respond to students’
particular needs.
Advances in educational measurement
have paved the way for the
introduction of progress maps or
achievement scales that articulate a
continuum of typical development in a
specified domain. Once defined, these
maps can be used to describe quality
student achievement at both a point in
time and over time. This development has also provided the means to establish where individual students are in a continuum of learning – the essential starting point from which to develop a relevant and appropriate learning pathway.
Quality teachers make
the difference
We know that quality teachers make a
significant difference to the learning
outcomes of students. John Hattie’s
(2003) recent rigorous and exhaustive
research has provided profound and
powerful evidence to support this
conviction – ‘excellence in teaching is
the single most powerful influence on
achievement'. The design of assessment, the collection of evidence, and the response to findings are intimately linked to the art of effective teaching and will impact significantly on student educational achievement.
In many disciplines, field professionals
are predominantly identified as having
the most astute and profound
knowledge, skills, experience and
professional capacity to make
judgements about the most effective
way to obtain, collate, interpret and
apply evidence. Professional educators
have a unique and specialised capacity
to lead and contribute to evidence-
based approaches to teaching and
learning – because it is they who know best both the subject matter and the individual learner. Teachers are distinguished from other professionals by their deep knowledge of how the learning process
occurs. This places teachers in an
inimitable position to utilise a range of
profession-specific, as well as locally
specific, skills, knowledge and
experiences, to improve the educational
outcomes of their students.
While it is necessary to value, or at
least consider, all sources of evidence,
we must not hesitate to recognise that
teachers are often in a leading position
to identify and act on the best way in
which to obtain and assess the
worthiness and weight of the diverse
range of evidence collected about
students. Just as the judgement and
authority of a doctor is respected in the
assessment he/she makes of a patient,
and the medication he/she prescribes
to achieve an outcome of health and
well-being, so too should the
professional expertise of teachers be
valued and trusted, in the quest for
high-quality educational results.
Teachers are in a unique position to
have an extensive and well-developed
range of strategies and techniques that
can be used to identify and meet the
current needs of a diverse range of
students – and, moreover, to match the
future desired achievements of the
students to a plan for action. No,
teachers cannot necessarily predict the
future! However, they do have a rich
capacity to accumulate a broad-ranging
repertoire of strategies that enable them
to match a strategy to a student’s needs.
With this knowledge base, teachers are able to make informed judgements about how best to work towards further developing students: selecting assessment strategies that accurately reflect what students know, using evidence to support students towards further achievement, and preparing students to be active and contributing citizens, now and into the future.
Furthermore, teachers are in a
distinctive position to be able to
interrogate evidence. The value of evidence does not lie solely in the description it provides of student achievement, but rather in the way in which this description is interrogated and understood in order to develop and apply appropriate strategies to improve student learning. It
is fair to say that traditionally the role of
the teacher in this process has been
undervalued. However, if evidence is to
be used most effectively, the capacity of
the teacher to ask the right questions
of evidence, to examine the how and
why of evidentiary results, and to
respond with the most effective
strategies, must be realised as
paramount.
While it is critical to realise and support
the role of teachers in leading and
contributing to evidence-based
approaches to teaching and learning, it
is also important to consider that
teachers have a responsibility to the
profession, as well as a broader social
responsibility, to account for decisions
that are made. In times of increased change, the teaching profession must build strong links with research communities in order to understand the most current developments in learning and development and so enhance and sharpen its knowledge. For, if we are to
support the notion that the creativity,
ingenuity and expertise of teachers be
valued and prioritised, the thinking and
instruction of teachers must be
relevant, perceptive, dynamic and
forward looking.
Allan Luke (1999) argues that effective
education requires alignment of the
three key message systems that exist in
education: curriculum, pedagogy and
assessment. Luke’s argument is a
powerful one, and teachers, enabled by
professional autonomy and
collaboration, are in a powerful position
to direct and sustain this alignment, in
order to provide effective education.
In identifying the variables that impact
on student learning, Hattie (2003)
confirms that within schools, teachers
account for about 30% of the variance
in student achievements – the major
source of ‘within-school’ variance. There
is also a ‘growing body of evidence that
the use of high-quality, targeted
assessment data, in the hands of school
staff trained to use it effectively, can
improve instruction' (Protheroe, 2001)
and consequently, student outcomes.
Furthermore, Nancy Protheroe suggests
that educators who have learned to
effectively use assessment data have
often ignited change and achieved
positive results. This evidence provides a compelling argument for the continuing development of the teaching profession and, in particular, for supporting teachers to play a leading role in evidence-based approaches to teaching and learning.
This includes supporting teachers to
see and learn from each other’s work
and experiences, in order to expand
the circle of professional collaboration
directed towards student achievement,
and developing ways to ensure that the
best teachers are retained in the area
of greatest impact – the classroom.
Conclusion
It is the ‘evidence’ that we are
presented with that often informs
decisions that are made about student
learning, and about the health of
education. However, evidence alone is
not sufficient to maximise student
outcomes. Quality teachers are a
fundamental part of the recipe for
successful evidence-based approaches
to teaching and learning. The
knowledge, skills, experience and
professional capacity of teachers must
be valued as essential ingredients in
meeting the goals of the core business
of education systems and ensuring that
educational attainment across the
nation continues to rise.
References
Department of Education Tasmania. (2005). Retrieved from http://www.education.tas.gov.au/ocll/currcons/profreadings/andrade.htm

Hattie, J. (2003). Teachers make a difference: What is the research evidence? Paper presented at the Australian Council for Educational Research Annual Conference on Building Teacher Quality.

Luke, A. (1999). Education 2010 and new times: Why equity and social justice still matter, but differently. Paper presented to the Education Queensland Online Conference, 20 October 1999. Retrieved from http://education.qld.gov.au/corporate/newbasics/docs/onlineal

Protheroe, N. (2001). Improving teaching and learning with data-based decisions: Asking the right questions and acting on the answers. Educational Research Service: Making a Difference in Our Children's Future. Retrieved from http://www.ers.org/spectrum/sum01a.htm

Timmins, R. (2004). Putting the nation to the test: Is there room for improvement? Paper presented at the 9th Annual Assessment Roundtable: Assessing Assessment Conference, Sydney, New South Wales, 7–9 November 2004.
Bruniges, M. (2005). An evidence-based approach to teaching and learning. Research Conference 2005, Australian Council for Educational Research.
Full Terms & Conditions of access and use can be found at
https://www.tandfonline.com/action/journalInformation?journalCode=vpsf20
Preventing School Failure: Alternative Education for
Children and Youth
ISSN: 1045-988X (Print) 1940-4387 (Online) Journal homepage: https://www.tandfonline.com/loi/vpsf20
Active Supervision, Precorrection, and Explicit
Timing: A High School Case Study on Classroom
Behavior
Todd Haydon & Stephen D. Kroeger
To cite this article: Todd Haydon & Stephen D. Kroeger (2016) Active Supervision, Precorrection,
and Explicit Timing: A High School Case Study on Classroom Behavior, Preventing School Failure:
Alternative Education for Children and Youth, 60:1, 70-78, DOI: 10.1080/1045988X.2014.977213
To link to this article: https://doi.org/10.1080/1045988X.2014.977213
Published online: 21 Apr 2015.
Submit your article to this journal
Article views: 1425
View Crossmark data
Citing articles: 7 View citing articles
https://www.tandfonline.com/action/journalInformation?journalCode=vpsf20
https://www.tandfonline.com/loi/vpsf20
https://www.tandfonline.com/action/showCitFormats?doi=10.1080/1045988X.2014.977213
https://doi.org/10.1080/1045988X.2014.977213
https://www.tandfonline.com/action/authorSubmission?journalCode=vpsf20&show=instructions
https://www.tandfonline.com/action/authorSubmission?journalCode=vpsf20&show=instructions
http://crossmark.crossref.org/dialog/?doi=10.1080/1045988X.2014.977213&domain=pdf&date_stamp=2015-04-21
http://crossmark.crossref.org/dialog/?doi=10.1080/1045988X.2014.977213&domain=pdf&date_stamp=2015-04-21
https://www.tandfonline.com/doi/citedby/10.1080/1045988X.2014.977213#tabModule
https://www.tandfonline.com/doi/citedby/10.1080/1045988X.2014.977213#tabModule
Active Supervision, Precorrection, and Explicit Timing:
A High School Case Study on Classroom Behavior
TODD HAYDON and STEPHEN D. KROEGER
University of Cincinnati, Cincinnati, OH, USA
One proactive approach to increasing student engagement in schools is implementing Positive Behavior Intervention and Support
(PBIS) strategies. PBIS focuses on prevention and concentrates on quality-of-life issues that include improved academic
achievement, enhanced social competence, and safe learning and teaching environments. This study is a replication of a study that
investigated the combination of active supervision, precorrection, and explicit timing. The purpose of the study was to decrease
student problem behavior, reduce transition time, and support maintenance of the intervention in the setting. Results show that
active supervision, precorrection, and explicit timing decreased student problem behavior, decreased the duration of transitions in two instructional periods, and that the intervention was maintained in the setting. Implications, limitations, and future research are discussed.
Keywords: active supervision, explicit timing, Positive Behavior Intervention and Support, precorrection, urban education
Positive classroom management practices with a primary
emphasis on forms of positive reinforcement have been dis-
cussed in the literature (Ahearn, 2010; Beaman, & Wheldall,
2000; Van Houten, Nau, MacKenzie-Keating, Sameoto, &
Colavecchia, 1982). Furthermore, proactive classroom man-
agement strategies such as active supervision, precorrection,
and explicit timing have been linked to positive student out-
comes, including increased student academic engagement and decreased disruptive behavior and transition time (Bohanon et al., 2006; De Pry & Sugai, 2002; Franzen & Kamps,
2008; Haydon, DeGreg, Maheady, & Hunter, 2012; Kazdin
& Klock, 1973; Warren et al., 2003). Systematic classwide
interventions that are efficient and comprehensive allow
teachers to attend to the needs of the entire classroom while
preventing further behavioral problems from occurring.
A strong evidence base has shown that teacher reprimands
increase disruptive behaviors (Beaman, & Wheldall, 2000;
Madsen, Becker, Thomas, Koser, & Plager, 1968; Stormont,
Smith, & Lewis, 2007; Thomas, Becker, & Armstrong, 1968;
Van Houten et al., 1982). One solution to reduce negative
responses such as reprimands is for teachers to implement
positive practices for managing unwanted classroom behav-
ior (Sidman, 2001). The combination of active supervision,
precorrection, and explicit timing is one such positive
practice.
The present study is a demonstration of the effective use of
active supervision, precorrection, and explicit timing and
contributes to the existing knowledge base in several ways.
The results of the study demonstrate that teachers can be
trained to learn the intervention in a short amount of time
(i.e., 30 min) and implement the intervention package with a
high degree of treatment adherence. In addition, the results
of the study provide more evidence of the successful imple-
mentation of the intervention by using a novel classroom set-
ting (a ninth-grade co-taught classroom), a different school
setting (urban setting) and using a new content domain
(English and social studies).
Active Supervision
De Pry and Sugai (2002) defined a flexible four-step process
of active supervision, including (a) moving among students
with a special focus on problem areas, (b) scanning the envi-
ronment to look for both appropriate and inappropriate
behavior, (c) interacting with a variety of students (e.g., hav-
ing conversations, providing precorrections, teaching appro-
priate behaviors), and (d) providing frequent positive
comments for observed appropriate behaviors. Johnson-
Gros, Lyons, and Griffin (2008) included additional compo-
nents of active supervision such as (a) arriving at the class-
room on time, (b) remaining in the setting throughout the
entire transition period, (c) moving toward groups of congre-
gating students in the classroom or hallway, and (d) physi-
cally escorting students throughout the entire transition.
Closely related to active supervision is the use of a precorrec-
tion procedure to support positive behaviors.
Address correspondence to Todd Haydon, College of Education,
Criminal Justice, and Human Services, University of Cincinnati,
ML 0022 Teachers College, Cincinnati, OH 45221, USA.
E-mail: todd.haydon@uc.edu
Preventing School Failure, 60(1), 70–78, 2016
Copyright © Taylor & Francis Group, LLC
ISSN: 1045-988X print / 1940-4387 online
DOI: 10.1080/1045988X.2014.977213
Precorrection
The use of precorrection procedures provides the kind of
prompting needed to move effectively from one activity to
another within a classroom, or from one place to another
(i.e., classrooms to cafeterias, entering or leaving a school
building). Johnson-Gros, Lyons, and Griffin (2008) defined
precorrection as an antecedent intervention that reduces pre-
dictable problem behaviors and increases appropriate
replacement behaviors through the daily review and
reminders of specific rules before being released into that set-
ting (Colvin, Sugai, Good, & Lee, 1997). The objective of
precorrection is to cue the student to engage in a more appro-
priate behavior before the problem behavior ever occurs
(Johnson-Gros et al., 2008; Lewis, Colvin, & Sugai, 2000). In
addition to precorrection, monitoring the time needed to
transition through the use of establishing a time limit (i.e.,
explicit timing) can be a supportive procedure.
Explicit Timing
A good proportion of instructional time can be lost when the
amount of transition time in a classroom is not carefully
monitored (Haydon et al., 2012). The use of an explicit tim-
ing procedure may provide the kind of monitoring needed to
move from one activity to another or from one place to
another. For example, Campbell and Skinner (2004) investi-
gated a sixth-grade teacher’s implementation of an explicit
timing procedure to reduce transition time between classes in
a rural public school. A digital stopwatch was used to mea-
sure transition times, and a chart was drawn with spaces to record the date and the number of seconds taken in a given transition. The procedure included informing the stu-
dents that it was time to perform the given transition activity
such as lining up or waiting for students to be quiet and
seated at their desks. Explicit timing procedures were taught
to the class and practiced. Daily public posting of the amount
of transition time was recorded on a chart. Immediately after
the implementation of the intervention, students showed a
substantial decline in the average amount of time taken for
transitions.
Haydon and colleagues (2012) used an ABCBC with-
drawal single-case design to compare the effects of the combi-
nation of active supervision and precorrection with and
without an explicit timing procedure on the number of
teacher redirections and number of minutes during transition
before a seventh-grade health science class. The baseline
phase (A) lasted nearly 2 weeks; during this phase, the teacher
typically responded to inappropriate behavior by using con-
sequences. The first intervention phase (B) lasted nearly 3
weeks. In this phase, the teacher implemented active supervi-
sion and precorrection. The second intervention phase (C)
lasted 1 week. Here, the teacher implemented a combination
of active supervision, precorrection, and explicit timing. The
teacher used a digital timer and placed the timer on an over-
head projector and reminded the students they had 2 min to
be seated at their desks and then be ready to start the first
classroom activity. The return to the second (B) phase lasted
nearly 2 weeks and the reintroduction of the second (C) phase
also lasted 2 weeks. In addition, the researchers used a main-
tenance check 8 days after the second intervention phase.
Results indicated that the teacher had fewer redirections and
decreases in the number of minutes of transition time when
active supervision, precorrection, and explicit timing were in
place. The present study is a replication of Haydon and col-
leagues’ (2012) study. The following are the primary ques-
tions that guided this research:
1. What is the effect of active supervision, precorrection,
explicit timing procedure on the level of student problem
behavior?
2. What is the effect of active supervision, precorrection,
explicit timing on the duration of transition?
Method
Setting and Participants
This study was conducted at an urban high school (Grades
9–10) with an average daily attendance of 517 students. The
high school was located in a large school district in a Mid-
western U.S. state and had programs focusing on science,
technology, engineering, and mathematics. The school had a
Positive Behavior Intervention and Support initiative with a
leadership team comprising an administrator and teachers
from various grade levels (Rhodes, Stevens, & Hemmings,
2011).
Proficiency test results on the Local Report Card for the
school included 9th- and 10th-grade reading, writing, mathe-
matics, science, and social studies achievement. With the
state requirement being 75%, 10th-grade results included
82% on reading, 89% on writing, 70.3% on mathematics,
55.2% on science, and 65.1% on social studies. The student
population in the school was Black non-Hispanic (86%), mul-
tiracial (3.4%), and White non-Hispanic (9.4%). Eighty-four
percent of the students received free or reduced-price lunch,
and 26% were identified with disabilities. The student popula-
tion in the classroom in which the study took place was 100%
Black non-Hispanic.
As part of ongoing consultation with the local university, the high school's ninth-grade team contacted the first and second authors to help with transitions. Observation data con-
firmed the need to positively support student transitions from
the hallway to learning tasks, and transition was especially
problematic after the return from lunch. The remaining
teachers on the team indicated that they did not need assis-
tance with transitions and observational data verified their
perception.
Three teachers participated in this study. The lead teacher
had 21 years of teaching experience in the school district and
was certified by the National Board for Professional Teach-
ing Standards in English language arts. The co-teacher had
13 years of teaching experience in the district and was certi-
fied in secondary social studies. The student teacher was in
his second and final year of obtaining a master’s degree in
secondary education. The lead teacher, co-teacher, and the
student teacher shared the responsibility of planning
instruction. The teachers co-taught two subjects (history and
English) in an interdisciplinary fashion in 200-min blocks.
Each subject had 60 students in one large room.
Response Definitions and Measurement
The primary dependent variable for this study was the fre-
quency of student problem behaviors. Problem behavior was
defined as any event in which a student was observed push-
ing, shouting, throwing, and/or whistling (Colvin et al.,
1997). Pushing was defined as any time a student used his or
her arm or body to make physical contact with another stu-
dent, resulting in that student being unbalanced or moved.
Shouting was defined as any occurrence in which a single stu-
dent’s voice could be heard noticeably above the normal con-
versation level present in the classroom. Throwing was
defined as any time a student picked up and tossed an object
(i.e., pencil, book or other objects) at other students. Last,
whistling was tallied if a short, high-pitched sound, produced by forcing a stream of air through a small opening of the lips, was heard.
Transition time within the class served as the secondary
dependent variable. Transition time was defined as, after the
sounding of the school bell, the number of seconds it took for
all students to be seated at their desks, writing in response to
a warm-up prompt, composing sentences with new vocabu-
lary, sharing examples with a peer, or with eyes on their
binder, materials or teacher. When all students were demon-
strating these in-seat behaviors (i.e., according to the lead
teacher’s expectations and criteria), the transition period was
defined as over. Data were collected starting at the beginning
of each observation period. Transition time was measured by
using duration recording. Student problem behaviors were
measured using an event recording method. Observers sys-
tematically scanned the room during 20-s observation periods
for 12 min, moving from left to right. Observers used event
recording and indicated on their observation sheet if
they observed an incident of student problem behavior during
the interval. The two data collectors were seated in the side of
the classroom where they had an unobstructed view of the
classroom.
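To make this recording scheme concrete, here is a minimal sketch (in Python, purely for illustration; the variable names and sample values are hypothetical, not data from the study) of event recording across the 20-s scan intervals of a 12-min observation, together with duration recording of transition time:

    # Illustrative sketch of the event- and duration-recording scheme
    # described above; sample values are hypothetical.
    OBSERVATION_MINUTES = 12
    INTERVAL_SECONDS = 20
    NUM_INTERVALS = OBSERVATION_MINUTES * 60 // INTERVAL_SECONDS  # 36 intervals

    # Event recording: one tally of problem behaviors per 20-s scan interval.
    problem_behavior_counts = [0] * NUM_INTERVALS
    problem_behavior_counts[3] += 1  # e.g., one shout observed in the 4th interval

    # Duration recording: seconds from the bell until all students show
    # the defined in-seat behaviors.
    transition_seconds = 214  # hypothetical transition of 3 min 34 s

    # Session-level frequency of problem behavior, as reported in the Results.
    print(sum(problem_behavior_counts), transition_seconds)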
Teacher Training
Before the study began the lead teacher, co-teacher, and stu-
dent teacher identified the major problem behaviors exhibited
by the students during the two transition periods. Examples
of problem behaviors included hitting, pushing, whistling,
and yelling. Next, the lead teacher, co- teacher, and student
teacher identified those behaviors that students should dis-
play instead of problem behaviors. The first two authors, the
lead teacher, co-teacher, and student teacher then developed
all procedures during a 30-min morning meeting. The goal
for the transition was to have a 4-min transition, starting when the bell rang and ending with all students demonstrating the defined in-seat behaviors. Each student upon
entering the room was asked to be seated and have materials
on the desk within 1 min; as a classroom, all students were
prompted to begin a warm-up posted on the screen and be
ready to learn within 4 min. These expectations and a warm-
up prompt were posted daily on a slide presentation.
Next, the first two authors trained the lead teacher, co-
teacher, and student teacher on the implementation of three
major components, active supervision, precorrection, and an
explicit time procedure, in a 30-min training session. The
intervention was modeled by the investigators during the
training and questions and comments about the intervention
were addressed. After the second author demonstrated the
interventions, the teachers verbally indicated that they under-
stood each component of the intervention and that they were
satisfied with having the procedures explained verbally and
the specific behaviors modeled and so the training was
concluded.
Active Supervision
The first and second authors provided the lead teacher, co-
teacher, and student teacher with a definition of active super-
vision. In active supervision, a teacher (a) circulates around
the classroom, (b) scans the classroom, (c) interacts with stu-
dents, and (d) acknowledges demonstrations of expected aca-
demic and social behaviors as part of instruction. The second
author role-played each component of active supervision
with the teachers to establish what the intervention looked
like (e.g., circulating throughout the classroom from the four
corners and the center of the room, visually sweeping the
classroom), taking attendance, and making positive com-
ments to students who were working on the assigned task.
In addition, the second author role-played a nonexample
of active supervision by standing in one location of the class-
room. At the request of the teachers the two researchers pro-
vided a script of the behaviors to follow for accurate
implementation of the intervention (see Table 1).
Precorrection
The first and second authors, along with the lead teacher, co-
teacher, and student teacher, developed a precorrection pro-
cedure that provided prompts and reminders when the stu-
dents entered the room in the following manner: The teachers
were instructed to remind students of desired behavior before
entering the room as part of the precorrection strategy. Spe-
cifically, they reminded the students to enter and focus,
remain seated, have a pen and binder, and complete the
warm-up in silence.
Explicit Timing Procedure
All three teachers were trained in the implementation of the
explicit timing procedure that consisted of (a) announcing to
the students when they entered the classroom that they had a
4-min time limit (“On the clock”), (b) telling the classroom
they had “one minute” left to start working on the warm-up,
and (c) prompting the students with statements such as “It’s
time,” or “Ready to go,” to indicate the end of transition
time and the start of the warm-up activity. At the end of the
training the two researchers provided the lead teacher, co-
teacher, and the student teacher with a script of the behaviors
to follow for accurate implementation of the intervention (see
Table 1).
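As a concrete illustration of this three-part announcement sequence, the following sketch plays out the 4-min limit (a hypothetical illustration only; Python is used purely to show the timing, and no such script was used in the study):

    import time

    # Minimal sketch of the explicit timing announcements described above.
    def explicit_timing(limit_minutes=4):
        print("On the clock.")            # (a) announce the 4-min time limit
        time.sleep((limit_minutes - 1) * 60)
        print("You have one minute.")     # (b) 1-min warning
        time.sleep(60)
        print("It's time. Ready to go.")  # (c) end of transition, start warm-up

    explicit_timing()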
Daily Data Review
Throughout the study, the researchers presented daily feed-
back in the form of visual graphs and brief notes on the fre-
quency of problem behaviors as well as the amount of
transition time. Feedback was sent via e-mail by the second
author to the lead teacher, co-teacher, and student teacher.
The lead teacher, co-teacher, and student teacher acknowl-
edged the receipt of the daily e-mails and were provided
opportunities to respond to daily feedback via e-mail or face
to face at the next observation period.
Interobserver Agreement
The second author served as primary observer and the first
author, two school psychology doctoral students, or two
senior undergraduate students, who had taken an applied
behavior analysis class, acted as secondary observers and
independently recorded data. The secondary observer also
completed the treatment integrity checklist at the end of each
session. During each session, there were two observers in the
classroom, except that during 15% of the sessions a third observer
was used to calculate interobserver agreement on integrity.
All observers were blind to the phase of the study: the place on the coding sheet for indicating the phase remained blank, and none of the secondary observers asked what phase the study was in. The secondary observers also recorded transition time. Interobserver agreement for transition time was calculated by dividing the number of agreements – sessions in which both observers' recorded durations were within 5 s of each other (the time it took to observe the last student in their seat and then record that behavior) – by the total number of sessions and multiplying by 100.
server agreement for student problem behavior was calcu-
lated using same method as the Haydon and colleagues
(2012) procedure. We first divided each session into 20-s
intervals and counted the number of problem behaviors in
each interval. We then divided the total number of agreed-
upon intervals by the total number of intervals and multiplied
by 100. An agreement was counted when both observers
recorded that the behavior occurred or did not occur in the
same interval. This was a more rigorous method of demon-
strating interobserver agreement and reduced potential bias
in the data collection procedure. Interobserver agreement
scores were calculated for 44.4% of the observations for each
phase of the study. Interobserver agreement averaged 93.7%
(range D 88.4–100%) for student problem behavior and
100% for transition time.
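Both agreement indices reduce to simple proportions. A minimal sketch of the two computations (hypothetical observer records, not the study's data; Python used purely for illustration):

    # Interval-based agreement for problem behavior: an interval counts as an
    # agreement when both observers recorded the same occurrence/nonoccurrence.
    def interval_ioa(primary, secondary):
        agreements = sum(1 for p, s in zip(primary, secondary)
                         if (p > 0) == (s > 0))
        return 100.0 * agreements / len(primary)

    # Session-based agreement for transition time: a session counts as an
    # agreement when the two recorded durations are within 5 s of each other.
    def duration_ioa(primary_secs, secondary_secs, tolerance=5):
        agreements = sum(1 for p, s in zip(primary_secs, secondary_secs)
                         if abs(p - s) <= tolerance)
        return 100.0 * agreements / len(primary_secs)

    # Hypothetical example: six 20-s intervals and three sessions.
    print(interval_ioa([1, 0, 0, 2, 0, 1], [1, 0, 1, 2, 0, 1]))  # 83.3
    print(duration_ioa([214, 180, 240], [217, 192, 238]))        # 66.7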
Across phases, the interobserver agreement percentages were calculated. During the initial baseline, interobserver agreement was 92.6% in the morning classroom and 88.4% in the afternoon classroom. During the first intervention phase, interobserver agreement was 100% in both the morning and afternoon classrooms. During the withdrawal phase, the interobserver agreement was 93.0% in the morning classroom and 95.0% in the afternoon classroom. During the second intervention phase, the interobserver agreement was 92.3% in the morning classroom and 91.0% in the afternoon classroom. During the maintenance phase, the interobserver agreement was 91.0% in the morning classroom and 90.5% in the afternoon classroom.

Table 1. Sample Script for Active Supervision, Precorrection, and Explicit Timing

Criteria for implementation:
1. Identify major problem behaviors exhibited by students during transition periods.
   a. Pushing, shouting, throwing, and whistling
2. Identify positive replacement behaviors.
   a. Enter and focus
   b. Remain seated
   c. Must have pen and binder
   d. Remain silent during warm-up
3. Design procedures for the transition period.
   a. Determine the amount of transition time based on the number of students (4 min)
   b. Post expected behaviors in the classroom
4. Practice teacher behaviors before implementation.
   a. Active supervision: scan and interact with students
   b. Precorrection: remind students of expected behaviors
   c. Explicit timing: display a timing device that all students may see

Measure / Situation / Example
Active supervision / As students enter the classroom / Scanning: teacher looks over the length of the area to be supervised
Active supervision / While students are walking toward their desks; while students sit at their desks / Interacting: circulating around the classroom; talking to a student or engaging a student nonverbally; smiling, signaling, prompting, acknowledging expected (on-task) behavior
Precorrection / As students enter the classroom / Verbal reminders such as “Check the board for rules about how to enter class and get started”; “As you enter class, remember, we have four minutes to begin quiet work”; and “You are on the clock”
Explicit timing / During the transition period / Display a timing device on an overhead projector or PowerPoint presentation for all students to see; convey verbal reminders such as “One minute”; “It’s time”; and “Start the warm-up”
Procedures
Baseline
During baseline, the lead teacher, co-teacher, and student
teacher typically positioned themselves in one part of the
classroom (i.e., side, doorway, front, and back) and raised
their voice to reprimand problem behaviors such as running,
hitting, whistling, and having loud conversations. The lead
teacher usually stood in the doorway and told the students to
“take a seat” as they entered the classroom. Next, he took
attendance from the side of the classroom while the other
teacher stayed near his desk at the front of the room and the
student teacher stayed near his desk at the back of the room.
Classroom rules were not posted in the classroom and obser-
vations indicated that the teachers did not engage in positive
interactions (i.e., praise statements). Typically, as one teacher
quieted the students in the front of the room, students in the
back of the room began shouting, talking, and laughing.
Intervention
The lead teacher carried out the active supervision procedure
while the co-teacher and student teacher carried out the pre-
correction procedure. The lead teacher circulated around the
entire length and four corners of the classroom, interacting
with students, and reinforcing expected academic and social
behaviors by saying, “This group is working well.” The other
teacher and student teacher stood in the doorway as students
entered the classroom and reminded students of the expected
behaviors. The following behaviors were listed on the slide
and projected on the screen: “Enter and Focus,” “Remain
Seated,” “Must Have Pen and Binder,” and “Silent Warm-
Up.” These prompts were projected on the screen through-
out the study and remained there for the first 15 min of each
period.
In addition to taking attendance and using active supervi-
sion, the lead teacher conducted the explicit timing procedure
by using a stopwatch and announcing a 4-min time limit,
“You are on the clock.” Next, the lead teacher informed the
students, “You have one minute.” Then, he stated that there
were “thirty seconds” left. Last, he provided a countdown
from 10 to 1. At the end of the countdown, the teacher
prompted behavior with “It’s time,” “You are working,” or
“Ready to go” to indicate the end of transition time and the
start of the warm-up activity.
Experimental Design
A concurrent multiple baseline across two instructional peri-
ods with a brief withdrawal phase was used (Kennedy, 2005).
During the brief withdrawal phase, the teachers did not use
any components of the intervention package. Withdrawal
constituted the removal of all active supervision, precorrec-
tion features, and explicit timing procedures. Daily data
review with the teachers continued. Because of the high rate of
student problem behavior, the teachers indicated that they
would like to reintroduce the intervention after one session.
Therefore, the intervention was reintroduced after the collec-
tion of one data point.
A maintenance phase of the study was implemented to
determine whether the teachers would continue to use the
intervention. Approximately 3 weeks after the end of the last
intervention phase, and the first day back from winter break,
data were collected on students’ problem behaviors in the
morning and afternoon sessions. Unannounced maintenance
checks were completed once per week for 8 weeks. During
maintenance, data were collected on the rate of student prob-
lem behavior and the amount of transition time in observa-
tion time periods (morning and afternoon) during the
beginning of class. Daily data review was provided using the
same procedures as in the earlier phases of the study.
Treatment Integrity
Direct measurement of the independent variable – implementation of the lead teacher's active supervision procedure (i.e., scanning, moving, interacting with students) and explicit timing procedure, as well as the co-teacher's and student teacher's precorrection procedure – was conducted as a measure of treatment integrity for 100% of the observation sessions. In
addition, during 15% of the sessions a secondary observer
was used to calculate interobserver agreement on integrity.
Although calculating interobserver agreement on integrity is
not typically done (Yarbrough, Skinner, Lee, & Lemmons,
2004), doing so provides more support for the claim that the
treatment was implemented as intended (Noell & Witt, 1998).
A checklist was used to record the occurrence or nonoc-
currence of each step of the intervention package. For active
supervision, the checklist included (a) moving around the
four corners of the room as well as the middle of the room,
and (b) interacting with students (the teachers used a microphone
so all verbalizations could be heard). For the explicit timing
procedure, the checklist included announcing (a) 4-min time
limit, (b) 1-min time limit, (c) 30-s time limit and (d) the
countdown from 10 to 1. The checklist for the precorrection
procedure included the step of reminding the students of the
classroom rules as they entered the classroom.
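Because the integrity checklist reduces to the share of scripted steps observed, it can be sketched as follows (the function and step labels are our own hypothetical illustration, not the study's instrument):

    # Illustrative treatment-integrity computation; the checklist entries
    # paraphrase the explicit timing steps above, but the code is hypothetical.
    def integrity_percent(checklist):
        """Percentage of scripted intervention steps observed."""
        return 100.0 * sum(checklist.values()) / len(checklist)

    explicit_timing_steps = {
        "announced 4-min time limit": True,
        "announced 1-min time limit": True,
        "announced 30-s time limit": True,
        "counted down from 10 to 1": True,
    }
    print(integrity_percent(explicit_timing_steps))  # 100.0, as reported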
Social Validity
The lead teacher, co-teacher, and student teacher completed a nine-item social validity assessment at the end of the study. Teachers rated statements using a 4-point Likert-type scale ranging from 1 (strongly disagree) to 4 (strongly agree) to determine the social validity and the teachers' perceptions of the educational effectiveness of the interventions. The rating scale addressed four categories: (a) the teacher's perceived ease of implementing the intervention, (b) the teacher's perceived effectiveness of the intervention, (c) the teacher's likelihood of using the intervention in the future, and (d) how likely they would be to recommend the intervention to other teachers.
Results
Figure 1 displays the frequency of student problem behavior
as well as the amount of transition time for both the after-
noon and morning sessions.
Student Problem Behavior
Baseline (BL) data for observation period 1 (afternoon) are highly variable, with a mean occurrence frequency of 18.83 (range = 4–29). Following the implementation of active supervision, precorrection, and explicit timing (IV1), there was an immediate change in level, resulting in a mean frequency of problem behavior of 3.0 (range = 0–7). The brief withdrawal phase (W) resulted in an immediate increase in student problem behavior that was followed by an immediate reduction in the level of student problem behavior (mean frequency = 1.75; range = 0–3), a stable trend, and little variability when the intervention was reintroduced (IV2). The frequency of student problem behavior remained at low levels during the maintenance phase (M), thereby demonstrating the sustainability of this intervention.

Baseline data patterns for observation period 2 (morning) demonstrated an upward trend with less variability than observation period 1 (mean frequency = 7.9; range = 4–13). The initial intervention phase (IV1) resulted in an immediate change in level and a decreasing trend for student problem behavior (mean frequency = 2.4; range = 0–9). Data in the brief withdrawal phase indicated an immediate change in frequency for student problem behavior. The reintroduction of the intervention phase was characterized by an immediate change in level for student problem behavior with a mean frequency of 0.75 (range = 0–3), a slight decreasing trend, and little variability. During the maintenance phase, the frequency of student problem behavior remained at low levels, thereby demonstrating the sustainability of this intervention.
Transition Time
Baseline data for observation period 1 (afternoon) indicate that mean transition time was 8 min, 54 s (range = 3 min to 12 min, 58 s). After the implementation of active supervision and precorrection, the amount of transition time was reduced to an average of 3 min, 42 s (range = 1 min to 4 min, 24 s).

[Figure 1. Frequency of student problem behavior and the amount of transition time for afternoon and morning sessions.]

The brief withdrawal phase resulted in a transition time of 3 min, followed by an average transition time of 2 min, 54 s (range = 2 min to 4 min) when the intervention was reintroduced. During the maintenance phase, 7 out of 8 sessions met the criterion of no more than 4 min of transition time.

Baseline data for observation period 2 (morning) indicate that mean transition time was 5 min, 56 s (range = 4 min to 8 min, 43 s). After the implementation of active supervision and precorrection, the amount of transition time per session was reduced to an average of 3 min, 36 s (range = 2 min to 4 min). The brief withdrawal phase resulted in a transition time of 5 min, followed by an average transition time of 3 min, 24 s (range = 3 min to 4 min) when the intervention was reintroduced. During the maintenance phase, 7 out of 8 sessions met the criterion of no more than 4 min of transition time.
Treatment Integrity
Treatment integrity data were collected for each teacher to
assess the implementation of each component of the interven-
tion. The lead teacher implemented the two-step active super-
vision procedure as well as the four-step explicit timing
procedure with 100% integrity. In addition, the co-teacher
and student teacher implemented the precorrection procedure
with 100% integrity.
Social Validity
One week after the collection of the last maintenance data point, the three teachers completed the social validity questionnaire. Mean scores for each question were calculated by totaling the three teachers' responses and dividing by three. Mean scores (M = 3.0; range = 2–4) on teachers' perceived ease with the study's procedures suggested that the three teachers implemented active supervision and the precorrection procedure with a fair amount of ease. In response to how effective and efficient the intervention was in reducing behavioral incidents, all three teachers gave the highest score of 4.0. High mean scores (M = 4.0) suggested that teachers found the intervention to be very successful, that they would continue to use the intervention in the future, and that they would recommend the intervention to other teachers.
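For instance, with hypothetical ratings consistent with the reported mean and range for perceived ease (M = 3.0; range = 2–4), if the three teachers rated an item 2, 3, and 4, the item mean would be

\[ M = \frac{2 + 3 + 4}{3} = 3.0 \]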
Discussion
Developing clear expectations and establishing common rou-
tines is an important process in any classroom, whether co-
taught or in a classroom with a single instructor. Taylor-
Greene and colleagues (1997) concluded that schools need to
combine systems of schoolwide behavioral support, individ-
ual student support, classroom behavioral strategies, and spe-
cific setting procedures to address a wide range of behavioral
challenges. This study aimed to systematically replicate an
earlier investigation using active supervision, precorrection,
and an explicit timing procedure (Haydon et al., 2012).
The findings in this study indicate that there are a few simi-
larities and differences with the earlier study by Haydon and
colleagues (2012). For example, the combination of active
supervision, precorrection, and an explicit timing procedure
were implemented in both studies. However, in the earlier
study active supervision and precorrection were compared
with and without an explicit timing procedure. The present
study made no such comparisons. Although transition time
was a dependent variable in both studies, in the earlier study
teacher behavior (redirections) was a dependent variable,
whereas in the present study student behavior (disruptive
behavior) was the dependent variable. Both studies included
a maintenance phase. However, in the present study, the
maintenance phase consisted of once a week probes for 8
weeks, while the earlier study used one maintenance probe
8 days after the last data point of the intervention. In the
present study, three teachers were used to implement the
intervention in a large co-taught ninth-grade classroom,
whereas in the earlier study the intervention was implemented
with one teacher in a seventh-grade classroom.
In a classroom with fewer students, a teacher could successfully implement a similar intervention with ease. For example, a single teacher could remind students of the expectations of the transition period as they enter the classroom, use active supervision while taking attendance, then give a 1-min reminder and tell students to start the warm-up activity.
The results of this study suggest that general educators with a small amount of training time (i.e., 30 min) can reduce student problem behavior and the amount of transition time by implementing a feasible intervention package consisting of active supervision, precorrection, and explicit timing. By using the intervention, the teachers were able to provide supports for students during transitions by creating highly structured environments (i.e., class to class and cafeteria to class). Furthermore, the teachers implemented the intervention with a high degree of treatment adherence. An indication of the overall effectiveness and sustainability of the intervention was that the positive results of the intervention were maintained over a period of two months.
The results of this study also provide additional evidence
that active supervision, precorrection, and explicit timing can
be effective in decreasing the amount of transition time. The decrease in transition time may be associated with an increase in academic instruction time; although the study does not provide data on this point, anecdotal reports indicate that academic instruction occurred earlier during the intervention than during baseline. These results are noteworthy because decreased
student problem behavior has been associated with increases
in on-task behavior and increases in instruction time thus
providing environmental supports for student learning (Clark
& Linn, 2003; Harn, Linan-Thompson, & Roberts, 2008).
Another positive outcome from this study is that high
social validity scores indicate that the intervention was imple-
mented with ease and was perceived as being effective. The
researchers hypothesized that ease of implementation may
have been due to the fact that the intervention was built into
the existing classroom routines. For example, the teachers
were already taking attendance and there was only a slight
teacher behavioral change, from leaning next to the counter
and commenting on negative student behaviors to walking
around, interacting, and commenting on positive student
behaviors.
A unique feature of the present study is that the interven-
tion was implemented in an urban high school setting. High
school settings can provide unique challenges where it may
be more difficult for teachers to engage in an array of proac-
tive behavioral support procedures. High schools are complex
organizations and generally have multiple administrators,
large numbers of personnel and students. Other challenges at
the secondary level include students struggling with expecta-
tions of learning tasks, reading levels and textbook readabil-
ity, including the introduction of significant levels of new
vocabulary and content specific academic language
(Armbruster & Anderson, 1988; Bean, Zigmond, & Hartman,
1994; Groves, 1995; Kinder, Bursuck, & Epstein, 1992).
When considering the results of this study, several limita-
tions must be noted. First, the study was conducted in one
ninth-grade classroom with three teachers in an urban high
school. Therefore, the positive results of the intervention may
be unique to this setting and may not generalize to other high
schools. Second, the teachers in this study self-identified
the need for assistance and actively took steps to recruit uni-
versity support. The fact that the teachers volunteered could
have inflated the treatment adherence data. However, the
positive effects of the intervention on student problem behav-
ior give some indication that the intervention may be effective
in other high schools. Third, because of teachers’ preferences,
a full withdrawal of the intervention was not possible; thus,
threats to internal validity could not be ruled out (Campbell
& Skinner, 2004). Even so, data in the brief withdrawal probe
indicate high rates of student problem behavior. A fourth
limitation of this study was that the researchers started the
intervention when student baseline problem behavior in the
afternoon group was improving (see Figure 1); thus, we can-
not be confident that the intervention was responsible for the
improvement. A third baseline condition could have
addressed this limitation and demonstrated stronger experi-
mental control (Boden, Ennis, & Jolivette, 2012; Hoyle, Mar-
shall, & Yell, 2011). Fifth, because the teachers knew they
were being observed, the scores on the treatment integrity
checklists could have been inflated (Podsakoff, MacKenzie,
Lee, & Podsakoff, 2003). Last, because researchers completed
the checklists, potential observer effects should be noted.
Future research should include continued analyses to
determine the extent to which individual components of
active supervision (moving, interacting, scanning), precorrec-
tion, explicit timing, and performance feedback contribute to
the observed changes in student and school staff behavior. In
particular, the positive and negative nature of these interac-
tions should be investigated. Future research could examine
the extent to which these components are necessary under
various environments (e.g., classroom, hallway, cafeteria,
recess). Future studies could include a more robust with-
drawal phase. However, once teachers implement an effective
practice they are understandably hesitant to remove it even
for a short time. Future research could also investigate the
effects of various components on reduced transition time and
increased academic learning time.
Teachers may use active supervision, precorrection, and
explicit timing as one method to improve student-teacher
interactions. Rather than reprimanding students, teachers
can create environments where little to no reprimands are
necessary. As a replacement behavior teachers can recognize
positive behavior and provide feedback in the form of behav-
ior-specific praise statements (Partin, Robertson, Maggin, Oli-
ver, & Wehby, 2010). Furthermore, the combination of the
strategies may also help teachers prevent serious behavior
from escalating by providing reminders of classroom rules
and expectations. For example, after a tough incident before
class such as a fight in the hallway (as was the case in this
study), teachers could provide reminders such as “Stay with the routine” and then provide feedback such as “Great recovery.”
Taken together, the present results document the feasibil-
ity and effectiveness of implementing active supervision, pre-
correction, and an explicit timing procedure during two
transition periods in a ninth-grade classroom in an urban
high school. Data indicated that student problem behaviors
in the morning and afternoon sessions continued below base-
line. The positive social validity ratings by teachers and sus-
tained use give some indication of a good contextual fit
because the teachers made only slight behavior changes to
successfully implement the intervention.
Author Notes
Todd Haydon is an associate professor at the University of
Cincinnati. His current research interests are effective teach-
ing practices, students with behavioral disorders, and positive
behavior and supports.
Stephen D. Kroeger is an associate professor at the University
of Cincinnati. His current research interests are racial aware-
ness development with prospective teachers as well as collab-
oration across teacher preparation programs.
References
Ahearn, W. H. (2010). Sidman on aversive control. Behavior and Philos-
ophy, 38, 149–151.
Armbruster, B. B., & Anderson, T. H. (1988). On selecting “consider-
ate” content area textbooks. Remedial and Special Education, 9,
47–52.
Beaman, R., & Wheldall, K. (2000). Teachers’ use of approval and dis-
approval in the classroom. Educational Psychology, 20, 431–446.
Bean, R. M., Zigmond, N., & Hartman, D. K. (1994). Adapted use of
social studies textbooks in elementary classrooms: Views of class-
room teachers. Remedial and Special Education, 15, 216–226.
Boden, L. J., Ennis, R. P., & Jolivette, K. (2012). Implementing Check
In/Check Out for students with intellectual disability in self-con-
tained classrooms. Teaching Exceptional Children, 45, 32–39.
Bohanon, H., Fenning, P., Carney, K. L., Minnis-Kim, M. J., Ander-
son-Harriss, S., Moroz, K. B., . . . Pigott, T. D. (2006). Schoolwide
application of positive behavior support in an urban high school: A
case study. Journal of Positive Behavior Interventions, 8, 131–145.
Campbell, S., & Skinner, C. H. (2004). Combining precorrection with an
interdependent group contingency program to decrease transition
times: An investigation of the timely transitions game. Journal of
Applied School Psychology, 20, 11–27.
Clark, D., & Linn, M.C. (2003). Designing for knowledge integration:
The impact of instructional time. Journal of the Learning Sciences,
12, 451–493.
Colvin, G., Sugai, G., Good, III, R. H., & Lee, Y. (1997). Using active
supervision and precorrection to improve transition behaviors in
an elementary school. School Psychology Quarterly, 12, 344–363.
DeMeo, W. (2012). School-wide positive school culture implementation
audit. Empowering Education: Consultation and Systems Support
Services, LLC. Retrieved from http://empoweringeducation.net/
services/program-development-evaluation
De Pry, R. L., & Sugai, G. (2002). The effect of active supervision and
pre-correction on minor behavioral incidents in a sixth-grade gen-
eral education classroom. Journal of Behavioral Education, 11,
255–267.
Flannery, K. B., Sugai, G., & Anderson, C. M. (2009). School-wide pos-
itive behavior support in high school: Early lessons learned. Journal
of Positive Behavior Interventions, 11, 177–185.
Franzen, K., & Kamps, D. (2008). The utilization and effects of positive
behavior support strategies on an urban school playground. Jour-
nal of Positive Behavior Interventions, 10, 150–161.
Groves, F. H. (1995). Science vocabulary load of selected secondary sci-
ence textbooks. School Science and Mathematics, 95, 231–235.
Harn, B. A., Linan-Thompson, S., & Roberts, G. (2008). Intensifying
instruction: Does additional instructional time make a difference
for the most at-risk first graders? Journal of Learning Disabilities,
41, 115–125.
Haydon, T., DeGreg, J., Maheady, L., & Hunter, W. C. (2012). Using
active supervision and precorrection to improve transition behav-
iors in a middle school classroom. Journal of Evidence-Based Prac-
tices for Schools, 1, 81–97.
Hoyle, C. G., Marshall, K. J., & Yell, M. L. (2011). Positive behavior
supports: Tier 2 interventions in middle schools. Preventing School
Failure, 55, 164–170.
Johnson-Gros, K. N., Lyons, E. A., & Griffin, J. R. (2008). Active
supervision: An intervention to reduce high school tardiness. Edu-
cation and Treatment of Children, 31, 39–53.
Kazdin, A. E., & Klock, J. (1973). The effect of nonverbal teacher
approval on student attentive behavior. Journal of Applied Behavior
Analysis, 6, 643–654.
Kennedy, C. H. (2005). Single-case designs for educational research. Bos-
ton, MA: Allyn & Bacon.
Kinder, D., Bursuck, W. D., & Epstein, M. H. (1992). An evaluation of
history textbooks. Journal of Special Education, 25, 472–491.
Lewis, T. J., Colvin, G., & Sugai, G. (2000). The effects of precorrection
and active supervision on the recess behavior of elementary stu-
dents. Education and Treatment of Children, 23, 109–121.
Madsen, C. H., Becker, W. C., & Thomas, D. R. (1968). Rules, praise,
and ignoring: Elements of elementary classroom control. Journal of
Applied Behavior Analysis, 1, 139–150.
Madsen, C. H., Becker, W. C., & Thomas, D. R. (2001). Rules, praise,
and ignoring: Elements of elementary classroom control. Journal of
Direct Instruction, 1, 11–25.
Nelson, J. R., Smith, D., & Colvin, G. (1995). The effects of a peer medi-
ated self-evaluation procedure on the recess behavior of students
with behavior problems. Remedial and Special Education, 16,
117–126.
Oswald, K., Safran, S., & Johanson, G. (2005). Preventing trouble:
Making schools safer places using positive behavior supports. Edu-
cation and Treatment of Children, 28, 265–278.
Partin, T. C., Robertson, R. E., Maggin, D. M., Oliver, R. M., &
Wehby, J. H. (2010). Using teacher praise and opportunities to
respond to promote appropriate student behavior. Preventing
School Failure, 54, 172–178.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P.
(2003). Common method biases in behavioral research: A critical
review of the literature and recommended remedies. Journal of
Applied Psychology, 88, 879–903.
Renihan, F. I., & Renihan, P. J. (1995). Responsive high schools: Struc-
turing success for the at risk student. High School Journal, 79, 1–15.
Rhodes, V., Stevens, D., & Hemmings, A. (2011). Creating positive cul-
ture in a new urban high school. High School Journal, 94, 82–94.
Rhymer, K. N., Skinner, S. H., Jackson, S., McNeill, S., Smith, T., &
Jackson, B. (2002). The 1-minute explicit timing intervention: The
influence of mathematics problem difficulty. Journal of Instruc-
tional Psychology, 29, 305–311.
Scruggs, T. E., Mastropieri, M. A., Berkeley, S., & Graetz, J. (2011). Do
special education interventions improve learning of secondary con-
tent? A meta-analysis. Remedial and Special Education, 36,
437–449.
Scruggs, T. E., Mastropieri, M. A., & McDuffie, K. A. (2007). Co-teach-
ing in inclusive classrooms: A meta-synthesis of qualitative
research. Exceptional Children, 73, 392–416.
Sidman, M. (2001). Coercion and its fallout. Boston, MA: Authors
Cooperative, Inc.
Siskin, L. (1994). Realm of knowledge: Academic departments in second-
ary schools. Washington, DC: Falmer.
Stormont, M. A., Smith, S. C., & Lewis, T. J. (2007). Teacher implemen-
tation of precorrection and praise statements in Head Start class-
rooms as a component of a program-wide system of positive
behavior support. Journal of Behavioral Education, 16, 280–290.
Taylor-Greene, S., Brown, D., Nelson, L., Longton, J., Gassman, T.,
Cohen, J., . . . Hall, S. (1997). School-wide behavioral support:
Starting the year off right. Journal of Behavioral Education, 7,
99–112.
Thomas, D. R., Becker, W. C., & Armstrong, M. (1968). Production
and elimination of disruptive classroom behavior by systematically
varying teacher’s behavior. Journal of Applied Behavior Analysis, 1,
35–45.
Van Houten, R., Nau, P.A., MacKenzie-Keating, S. E., Sameoto, D., &
Colavecchia, B. (1982). An analysis of some variables influencing
the effectiveness of reprimands. Journal of Applied Behavior Analy-
sis, 15, 65–83.
Warren, J. S., Edmonson, H. M., Griggs, P., Lassen, S. R., Mccart,
A., Turnbull, A. H., & Sailor, W. (2003). Urban applications
of school-wide positive behavior support: Critical issues and
lessons learned. Journal of Positive Behavior Interventions, 5,
80–91.
Yarbrough, J. L., Skinner, C. H., Lee, Y. J., & Lemmons, C. (2004).
Decreasing transition times in a second grade classroom: Scientific
support for the timely transitions game. Journal of Applied School
Psychology, 20, 85–107.
Effects of Tiered Training on General Educators’ Use of Specific Praise
Michele Terry Thompson, Michelle Marchant, Darlene Anderson, Mary Anne Prater, Gordon Gibb
Brigham Young University
Education and Treatment of Children, Volume 35, Number 4, November 2012, pp. 521–546
Published by West Virginia University Press
https://muse.jhu.edu/article/487097
Correspondence to Michelle Marchant, Department of Counseling Psychology and
Special Education, 340-B MCKB, Brigham Young University, Provo, UT 84062; e-mail:
michelle_marchant@byu.edu.
Abstract
Research suggests a compelling correlation between teacher behavior and
effective learning environments. Focusing on the evidence-based teaching
skill of offering behavior-specific praise (BSP), the researchers worked
with three elementary-level general educators in a tiered model of training
generally known as response to intervention (RtI). Although RtI commonly
provides targeted instructional support to students, this study used the RtI
framework to provide professional development instruction to teachers.
The researchers also tracked the behavior of three students identified by
the teachers as having behavioral difficulties, who became the focus of each
teacher’s BSP. Results showed increases in rates of BSP following the Tier 2
and Tier 3 interventions (video self-monitoring and peer coaching), but not
following the Tier 1 intervention (school-wide in-service training). Averages
for all three students’ on-task behavior increased with increased teacher BSP.
Keywords: behavior-specific praise, response to intervention, faculty peer
coaching, video self-monitoring, professional development, tiered training
Improving public education is of national concern as many schools grapple with low achievement results in the context of legislative
mandates for increased student achievement and highly qualified
teachers (No Child Left Behind Act, 2001). The ability of a teacher to
manage student behavior has been emphasized as a skill that leads to
increased learning time and improved academic and social outcomes
(Simonsen, Fairbanks, Briesch, Myers, & Sugai, 2008). In particular,
the use of behavior-specific, contingent praise has been documented
as a teaching practice that consistently results in improved student
academic and social behavior (Cherne, 2009; Sugai, 2007). However,
significant evidence indicates that teachers rarely use praise effectively
in the classroom (Beaman & Wheldall, 2000; Brophy, 1981; Burnett,
2002; Ferguson & Houghton, 1992; Sutherland, Wehby, & Copeland,
2000). This study explored a tiered professional development structure
that was used to teach teachers to use behavior-specific contingent
praise in dealing with disruptive student behavior.
Background and Literature Review
Creating professional development systems that effectively
support and sustain teachers’ use of identified effective practices can
be difficult (Guskey & Yoon, 2009). Several forms and strategies are
common at present: meetings and workshops, self-monitoring, and
instructional coaching. These will be discussed in detail below.
The most typical professional development strategy includes
meetings or workshops in which participants passively listen to di-
dactic instruction. Research suggests several drawbacks to this type of
teacher training (Sprick, Knight, Reinke, & McKale, 2006). First, little
to no follow-up training or implementation accountability occurs.
Second, passive delivery gives attendees few opportunities to practice
for skill mastery. Finally, and perhaps most important, little evidence
of generalization to classroom implementation exists (Elmore, 2002;
Garet, Porter, Desimone, Birman, & Yoon, 2001; Garet, Wayne et al.,
2010; Fixsen, Naoom, Blasé, Friedman, & Wallace, 2005; Myers, Si-
monsen, & Sugai, 2011; Yoon, Duncan, Lee, Scarloss, & Shapley, 2007).
Self-monitoring is a professional development strategy that pro-
vides teachers with data on which to reflect, making it effective for
changing a variety of behaviors in various settings (Kalis, Vannest,
& Parker, 2007). Kalis et al. had teachers self-monitor using a pocket
counter, which they clicked to record instances of behavior-specific
praise, with time allotted for analyzing the data. This simple cost-ef-
fective method makes the teacher aware of his or her use of a targeted
skill, but teachers must be able and committed to accurately collect
data during instruction.
Teachers can also self-monitor by video recording their lessons,
thus collecting data for evaluating self and student behaviors with-
out interrupting the flow of lesson delivery (Sherin & van Es, 2005)
and creating a permanent product that may decrease inaccuracy in
data collection. These self-monitoring tools offer reliable measures of
teacher behavior and enable an efficacious follow-up procedure that
has been shown to increase the likelihood of treatment implementa-
tion (Noell et al., 2005).
Autonomous performance feedback can be as simple as creating
a graph of collected data, listening to audio recordings, or viewing
video. However, self-monitoring tactics, when employed without in-
volvement from an experienced peer such as a skilled instructional
coach, can be ineffective, confusing, and impractical to teachers, leav-
ing them without a clear path to positive change (Colvin, Flannery,
Sugai, & Monegan, 2009; Joyce & Showers, 1995; Sprick et al., 2006).
In addition to workshops and self-monitoring tools, instruction-
al coaching can be effective in teacher professional development (On-
chwari & Keengwe, 2008; Stichter, Lewis, Richter, Johnson, & Bradley,
2006). Instructional coaching addresses the needs identified by teach-
ers to tackle specific individual concerns, learn in collaborative pro-
fessional environments, and receive ongoing support by competent
peers (Guskey & Yoon, 2009).
Coaching is an intensive intervention requiring a positive work-
ing relationship with a colleague. Spontaneous coaching or consulta-
tion in natural settings can be as effective as more formalized coaching
structures. Research relating coaching to student outcomes is incon-
clusive (Garet, Porter et al., 2001); however some studies suggest im-
provements in teacher ability and confidence (e.g., Sprick et al., 2006).
Studies of coaching models and efficacy commonly suggest a
need for structure to the coaching process if it is to produce change
(Peterson, Taylor, Burnham, & Schock, 2009; Sprick et al., 2006; Stich-
ter, Lewis, Richter, Johnson, & Bradley, 2006). Components include
(1) school-wide common classroom management practices, (2) ob-
servational guides, (3) pre-conferences to determine target teaching
skills, (4) post-conferences to collaboratively analyze direct observa-
tion data, (5) intervention choices (such as modeling or observation in
other classrooms), (6) goal setting and follow-up, and (7) repetition of
the process as needed.
These three professional development strategies, meetings and
workshops, self-monitoring, and instructional coaching, can be in-
corporated as tiered level support for teacher improvement. When
the three are used sequentially—with each increasing in intensity in
response to perceived need—the approach is similar to response to
intervention (RtI). Generally used for students, RtI is a multi-tiered
problem-solving approach used to proactively apply high quality evi-
dence-based learning strategies matched to student need according to
data (Ardoin, 2006; Barnett, Daly, Jones, & Lentz, 2004; Fuchs & Fuchs,
2006; Gresham, 2005). Numerous studies have been conducted using
RtI to support students’ academic and social behavior at school, yet
limited research has examined the use of RtI in professional develop-
ment for teachers (Coyne, Kame’enui, & Carnine, 2007; Kame’enui,
2007; Myers et al., 2011).
A study by Myers, Simonsen, & Sugai (2011) applied an RtI ap-
proach to enhance teacher behavior. Teacher participants were self-
nominated general and special educators in a middle school who had
contacted the researcher seeking assistance with excessive and dis-
ruptive student behavior. The study took place in schools that were
successfully implementing school-wide positive behavior support;
thus all staff had received training to support positive classroom be-
havior, considered the universal or primary intervention.
Myers et al. (2011) defined two essential criteria: a ratio of four
positive interactions to one negative interaction with students, and six praise
statements per 15-minute observation. After training had been pre-
sented on behavior specific praise (BSP) rates and positive to negative
interaction ratios, a teacher evaluation was conducted to determine
acquisition of these skills. Four of the self-nominated teachers who
had not responded to the training as desired became the participants
in the study. These four received a secondary or more intensive inter-
vention, which included meeting weekly with the researcher who (a)
provided visual feedback in the form of a graph showing BSP rates,
positive to negative ratios, and student on-task behavior, (b) praised
improvements in teacher and student outcomes, (c) offered recom-
mendations for change, and (d) identified goals with the teacher for
the next observation. If criteria were not met at this level, a tertiary
intervention was introduced, which included a more intensive feed-
back schedule (following each observation), additional suggestions
for increasing praise rates and positive to negative ratios, and more
individualized support. Some of the noted limitations of the Myers
study are: (a) students were selected randomly by the trained observ-
ers rather than by pre-determined selection criteria, (b) teachers
were self-nominated, and (c) the schools were participating in school-
wide positive behavior support training and implementation.
This current study is a systematic replication of the Myers et al.
(2011) study; the limitations noted above informed its design.
Differences are as follows:
(a) general education teachers in elementary school settings were the
targeted participants, selected from a pool of principal-nominated
teachers, (b) participating schools were not involved in or monitored
for school-wide positive behavioral support training, (c) intervention
included behavior specific praise training at the universal level, video
self-monitoring at the secondary level, and coaching at the tertiary
level, (d) data were collected on teacher-delivered BSP and student
on-task behavior, and (e) criteria for teacher praise rates (PR) were
determined using baseline PR with a percentage increase as opposed
to a pre-determined number of praises per minute. Table 1 indicates
differences as compared with the Myers et al. study.
Table 1
Comparison of Myers et al. (2011) and Present Study

Participant selection. Myers et al. (2011): self-nominated. Thompson (2011): principal-nominated.
Participant criterion. Myers et al.: SWPBS training; P:R = reprimands greater than praise. Thompson: BSP rates < 50% of baseline.
Setting. Myers et al.: middle school in Northeast US, implementing SWPBS. Thompson: elementary schools, Western US, no SWPBS.
Dependent variables. Myers et al.: BSP, general praise, P:R, composite STOT. Thompson: BSP, targeted STOT.
Independent variables. Both: RtI approach, adjusting level of support according to teacher performance.
Tier 1 intervention. Myers et al.: SWPBS training mastery. Thompson: faculty training meeting on BSP.
Tier 2 intervention. Myers et al.: weekly 10-min consultation. Thompson: video self-monitoring of BSP.
Tier 3 intervention. Myers et al.: increased consultation. Thompson: coaching (consultation).
Movement criterion. Myers et al.: 6 BSP per 15 min, P:R = 4:1. Thompson: BSP rates 50% > baseline.

Note. SWPBS = schoolwide positive behavioral support intervention plan; P:R =
ratio of praise to reprimand; BSP = behavior-specific praise; STOT = student time on-
task. Information for comparison is from “Increasing Teachers’ Use of Praise with a
Response-to-Intervention Approach,” by D. M. Myers, B. Simonsen, and G. Sugai,
2011, Education and Treatment of Children, 34(1), pp. 36–59.
This study adds to the literature on effective professional de-
velopment methods by examining the effects of tiered interventions
on teacher behavior. The primary purpose of this study was to spe-
cifically evaluate the relationship between an RtI approach to teacher
training and the frequency with which general education teachers
implemented behavior-specific praise. This study was not designed
to explore the relationship between teachers’ delivery of BSPs and
student behavior; however, the researchers did collect supplemental
evidence of the corresponding effects of teachers’ BSP on student behavior.
Two ancillary purposes of the study were to explore (a) the educators’
perceptions of the utility and effectiveness of the intervention, and (b)
the effects of elementary general educators’ behavior-specific praise
on the on-task behaviors of students identified as being disruptive.
Method
Participants
Selection process. The first author met with the principals of four
elementary schools to discuss the purpose of the study and request
names of three to four general education teachers per school who
might participate. The nominations were based on concerns for unre-
solved disruptive student behavior and/or specific requests of teach-
ers for additional behavior management support. Twelve teachers
(three in each of the four schools) were contacted by the first author
and by their principal and informed that behavior data would be col-
lected in their classroom in an effort to provide behavioral support
and selection purposes. Data were collected on the frequency of all
twelve nominated teachers’ BSP. Following this series of data, three
participant teachers were identified, and ultimately chosen, based on
three criteria: (a) principal nomination of a teacher who indicated that
she had one student with disruptive behavior, (b) frequency of BSP
less than one per 5-minute interval as observed by a district interven-
tion team paraeducator over several 15-minute observations, and (c)
agreement to participate in the study, as indicated by signing a con-
sent form.
The criteria for student participation were that the student (1)
did not at that time have a formal behavior intervention plan and (2)
would be present for the majority of the observation period. One was
chosen from the class of each teacher; consent from parents and assent
from students were obtained.
Teacher participants. Three white female elementary teachers,
“Anna,” “Jane,” and “Gail,” participated in the study. All were be-
tween ages 40 and 50. Two had earned bachelor’s degrees in educa-
tion and accumulated over 10 years of classroom teaching experience,
and one had previously taught art as a classified employee and at the
time of the study was working towards certification in her first year of
an alternative licensure program (ARL). None of these teachers were
trained formally in Positive Behavior Support or RtI methods. Their
training in behavior and classroom management was minimal, deriving
only from their teacher preparation programs. (See Table 2.)
Student participants. Student participants were three Caucasian
males, ages 8, 10, and 11. All three were reported by their teacher to
be noncompliant and disruptive in class; one had an individualized
education plan (IEP).
The coach. The coach, the primary researcher for this study, was
a female certified special educator with a BA degree and 10 years of
teaching experience. She was working as a program specialist for the
district special education department.
Table 2
Teacher Characteristics

Participant/School   Grade   Years teaching   Highest degree earned
Anna-A               4       11               Bachelor's in education
Gail-C               4       13               Bachelor's in education
Jane-B               3       1                BS; working on ARL

Note. BS = bachelor of science; ARL = alternate route to licensure.
Settings
The study took place in three public elementary schools of a sub-
urban district in the Western United States.
School A. School A had a total student population of 691, with
22.1% qualifying for free and reduced price lunch, 1.6% learning
English as a second language, and 13.5% receiving special education
services. The study was conducted in a general education class of 31
fourth-grade students, including two who had IEPs and two who
showed attention difficulties. The classroom management system in-
cluded a “three strikes” approach: After three reminders to comply,
consequences followed each offense. Researchers noted a relaxed at-
mosphere with student-teacher relationships that were more familiar
than formal.
School B. School B had a total student population of 535, with
60.7% receiving free or reduced price lunch, 13.5% learning English as
a second language, and 18.5% qualifying for special education servic-
es. The study was conducted in a general education class of 26 second-
grade students, six of whom were designated by the teacher as having
attention and behavior difficulties. The classroom behavior manage-
ment strategy was a chart with colored cards: green for acceptable
behavior, yellow for failure to follow instructions despite reprimands/
corrections, red for teacher conference and parent contact. Students
with yellow or red cards lost certain privileges. The teacher interacted
with her students in a familiar manner and delivered consequences
directly.
School C. School C had a student population of 840, with 26.7%
receiving free and reduced price lunch, 3.5% learning English as
a second language, and 13.1% receiving special education services.
The study took place in a general education class of 26 fourth-grade
students. The teacher indicated that five students had IEPs and four
students had attention issues or non-compliant behaviors. A token
economy system was used, with each student receiving a ticket at the
beginning of the class with opportunities to earn additional tickets
throughout the day; later the tickets could “buy” items. The teacher
was approachable yet more formal than familiar with her students.
Dependent Variables
Teacher behavior. The general education teacher behavior re-
corded as the dependent variable in this study was the frequency of
behavior-specific praise statements for students’ academic and social
conduct. Behavior-specific praise (BSP) is defined as a verbal state-
ment (a) indicating approval, (b) describing a specific desired social or
academic behavior exhibited by the student, and (c) including a praise
word (e.g., great, appreciate, excellent).
• “Sam, I appreciate the way you asked James to join you in
the group activity.”
• “Jane, you did a great job helping Megan figure out that
problem.”
• “Troy, you defined that vocabulary word so well. Now you
will be able to understand the story!”
Vague positive statements not linked to specific behavior (“Great job!”
“Super!” “Good!”) were not acceptable.
Student behavior. On- and off-task student behaviors were ob-
served and recorded in conjunction with teacher behavior. On-task
behavior included students orienting themselves towards the teacher:
for example, taking notes on teacher lectures, raising a hand to ask a
clarification question, or performing tasks when directed by the teach-
er. Student off-task behavior was defined as the student not orient-
ing to the task or work when directed by the teacher. Such behaviors
might include a student looking in his desk or out the window, talking
with a peer about a nonrelated subject, putting his head on the desk,
or doing an unrelated activity. Also considered off task was disruptive
behavior, defined as behavior that disrupts the flow of instruction and
the learning of other students (e.g., shouting, talking to other students
during teacher instructions, making noises, or throwing objects).
Independent Variable
An RtI framework was implemented to support teachers in
learning and implementing behavior strategies. Once the teachers
were taught the strategies, their use of them was evaluated. Teachers
advanced from tier to tier based on their progress, or the lack of prog-
ress, within the tiered framework of predetermined criteria.
Tier 1—School-wide training in behavior-specific praise. The Tier 1
(primary) intervention was a one-time training session held during
a 30-minute faculty meeting. The researcher conducted this presen-
tation, which (a) defined general and behavior-specific contingent
praise for social and academic student behavior, (b) shared research
on the effectiveness of using high rates of BSP to increase students’
positive behavior, and (c) provided teachers opportunities to practice
verbalizing BSP statements. At the conclusion of the training, all were
encouraged to increase current personal BSP rates by 50%. This crite-
rion was selected by the primary author as a manageable goal for most
teachers to attain when asked to increase their frequency of praise. In-
stead of comparing them to an unknown, ambiguous standard (the
literature is unclear about what the standard for praise rate should be), the
author chose to encourage the participants to use their own baseline
as the standard from which to improve.
Tier 2—Video self-monitoring. Participants at the Tier 2 level video
recorded themselves teaching a lesson segment of at least 15 minutes
but no longer than 25 minutes. While watching the video, they self-
scored the data on BSP rates by counting the total number of BSPs
during a 15-minute teaching segment and sent the numerical data to
the experimenter via email.
Tier 3—Coaching. A coach (the first author) was introduced
at Tier 3 to provide non-evaluative support and to guide the
teacher through the problem-solving cycle. The coach sent emails
giving specific praise for data collection and improved rates of BSP
and also made personal visits providing encouragement and sharing/
discussing data. Additionally, a variety of interventions were offered
to the participants, including use of a MotivAider (Behavioral
Dynamics, 2010), a device that vibrates to signal fixed/intermittent
time intervals, continued use of the Kodak FLIP video camera, and op-
portunities to observe in other classrooms or to have the coach teach
a lesson segment in the participant’s classroom demonstrating a high
frequency of BSP.
All participants chose to use the MotivAider (Behavioral Dy-
namics, 2010) to prompt delivery of BSP to the target student. Par-
ticipants were encouraged to achieve a 50% increase in the average
frequency of BSP, individualized goals being calculated by multiply-
ing the frequency of praise of previous intervention conditions by 1.5.
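To make the goal rule concrete, the arithmetic reduces to a single multiplication. The sketch below (Python) is a minimal illustration; the function name and sample counts are invented for the example, not taken from the study's materials.

# Minimal sketch of the individualized goal rule described above: the
# target is 1.5 times (a 50% increase over) the mean BSP frequency of
# the previous intervention condition. Sample counts are hypothetical.

def bsp_goal(previous_condition_counts: list[float]) -> float:
    mean_bsp = sum(previous_condition_counts) / len(previous_condition_counts)
    return 1.5 * mean_bsp

print(bsp_goal([2, 3, 1]))  # mean = 2.0, so the goal is 3.0 BSP per observation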
Data Collection Procedures
Measurement. Data were collected on the dependent variables
by direct observation of both teachers and students. Teacher behav-
ior (BSP) was recorded using event recording with paper and pencil
during a standard 5-minute observation session. The researchers then
calculated and graphed the frequency with which BSP was delivered
by each teacher per observation session.
Student behavior was recorded by a paper/pencil momentary
time sampling at the end of each 10-second interval: a “+” sign for an
on-task interval and a “-” sign for off-task behavior. As the research-
ers primarily focused on increasing specific teacher behavior, the mo-
mentary time sampling was sufficient to determine the potential im-
pact on student behavior. Data were monitored daily by the primary
researcher through emails to the observers and participants. Raw data
were recorded on an Excel spreadsheet, which created a numerical
sequential list and also generated a line graph.
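The two recording procedures can be summarized computationally. The sketch below is a minimal illustration under the definitions above; the function names and sample data are hypothetical, not part of the study's protocol.

# Hedged sketch of the two session-level measures described above.

def bsp_frequency(praise_tallies: list[str]) -> int:
    # Event recording: each tally marks one behavior-specific praise
    # statement observed during the session.
    return len(praise_tallies)

def percent_on_task(interval_marks: list[str]) -> float:
    # Momentary time sampling: one "+" (on-task) or "-" (off-task)
    # mark at the end of each 10-second interval.
    return 100.0 * interval_marks.count("+") / len(interval_marks)

# A 15-minute segment yields 90 ten-second intervals.
marks = ["+"] * 55 + ["-"] * 35
print(bsp_frequency(["tally", "tally", "tally"]))  # 3 BSP statements
print(round(percent_on_task(marks), 1))            # 61.1 (percent on-task)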
Observers and observer training. Data were collected by the pri-
mary researcher and two paraeducators from a district intervention
team, who had 5 to 15 years of training in collecting teacher and stu-
dent behavior data. Both paraeducators were white females, one age
51 and the other age 40. Both spent most of their working day in gen-
eral education classrooms providing support as needed for students
with various learning and behavioral disabilities.
The primary researcher provided weekly training to the observ-
ers in data collection procedures for this particular study. Training
included mastering behavioral definitions, distinguishing between
written and video examples and non-examples of behav-
ior, and practicing recording data. Trainees were required to obtain
100% accuracy.
Interobserver agreement (IOA) data were calculated on 31% of
the sessions across all experimental conditions. Data were compared,
and agreement was defined as two independent observers marking
the same total number of tallies for teacher behavior and the same
“+” and “-” marks for student behaviors. For teacher behavior (event-
recorded behavior), agreement was calculated by dividing the lower
total by the higher total × 100. For student behavior (interval-recorded
behavior), agreement was calculated as the number of agreements
divided by the number of agreements plus disagreements × 100. If
IOA dropped below 85%, the observers were retrained using the
training steps previously outlined; this happened only twice.
Average IOA for BSP was 100%, with a mean of 100% for both baseline
and intervention. Average IOA for student on-task behavior was 95%
(range 81–98%), with a mean of 95% for both baseline and interven-
tion. The overall IOA for teacher and student behaviors was 97.5%.
Experimental Design
This study used a multiple probe design across participants to
evaluate the effects of the independent variable.
Baseline. During baseline conditions, with no systematic profes-
sional development addressing BSP, the frequency of teacher BSP and
student on-task behavior data were simultaneously collected by the
observers according to standard intervention team procedures. Data
were collected when the teacher was engaged in a 15-minute direct
teaching segment, at a specific time each day. If the frequency of BSP was
< 2 per 15-minute observation period, the school was selected to con-
tinue with the study, including the school-wide faculty training on
BSP.
Tiers of intervention. Intervention was structured in three tiers,
comparable to those used in the response to intervention (RtI) ap-
proach.
Tier 1. The school-wide faculty training was the first condition
(primary tier) of intervention. Collection of data on teacher BSP and
student on-task behavior continued for all 12 principal-nominated
teachers at least three times per week by observer paraeducators
assigned to that classroom. As was mentioned in the participant selection
section, these data were collected for the purpose of providing be-
havioral support and selecting participants. One teacher from each of
the three schools was selected as a participant for the study based on
her frequency of BSP, availability, and willingness to participate for
the duration of the study. A participant whose frequency of BSP was
greater than or equal to a 50% increase from baseline would remain at
Tier 1 intervention; those who were below a 50% increase from base-
line moved to the Tier 2 intervention.
Tier 2. Tier 2 intervention conditions added a self-monitoring
process, whereby participants would use a Kodak FLIP video camera
to tape 15-minute lesson segments of their own teaching. Participants
collected BSP data from watching the video, recorded the total BSP
counts during a 15-minute segment, and sent the data to the primary
researcher via email at least three times per week. The researcher kept
the participant data while the intervention team continued to collect
data on teacher BSP and student on-task behavior. When the frequen-
cy of BSP observed by the research team was greater than or equal
to a 50% increase from Tier 1, the participant continued to use video
self-monitoring until at least three consecutive data points indicated
an increase of at least 50% from baseline. If the frequency of BSP from
the observer fell below a 50% increase from Tier 1 for two or more
data points, the participant moved to Tier 3.
Tier 3. Tier 3 intervention continued the video self-monitoring
process and added a coach—the primary researcher. She examined
the graphed data with the participant and asked the following reflec-
tive questions: What did you observe during video self-monitoring?
What did you notice about your data? What strategies for increasing
BSP are effective for you? What have you noticed about student be-
havior?
Observers continued to collect data. When the frequency of BSP
was greater than or equal to a 50% increase from Tier 2, coaching was
minimized to include only the researchers’ encouragement and be-
havior-specific praise to the participant at least three times per week.
If BSP from the observer dropped below a 50% increase from Tier 2,
the coach reintroduced video self-monitoring and increased personal
visits until three consecutive data points showed an increase in BSP
that was greater than or equal to a 50% increase from Tier 2.
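Taken together, the movement rules across tiers amount to checking recent observations against a 50% increase over the previous condition. The sketch below is one possible encoding of that logic under a simplifying assumption (each data point is compared with the prior condition's mean; the names are illustrative, not the study's exact decision procedure).

def meets_criterion(bsp_count: float, prior_condition_mean: float) -> bool:
    # Criterion: a 50% or greater increase over the prior condition.
    return bsp_count >= 1.5 * prior_condition_mean

def needs_more_support(recent_counts: list[float],
                       prior_condition_mean: float) -> bool:
    # Move to a more intensive tier when two or more recent data
    # points fall below the 50%-increase criterion.
    misses = sum(not meets_criterion(c, prior_condition_mean)
                 for c in recent_counts)
    return misses >= 2

print(needs_more_support([0, 1, 0], prior_condition_mean=0.2))  # True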
Treatment Fidelity
To ensure proper treatment implementation, a checklist was de-
veloped for each condition of the study. Treatment fidelity was calcu-
lated as the total number of steps followed, divided by the total num-
ber of listed steps × 100. Data on treatment fidelity are reported below.
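As a worked illustration of that calculation (the checklist items below are invented for the example, not the study's actual steps):

def treatment_fidelity(checklist: dict[str, bool]) -> float:
    # Steps followed divided by total listed steps, times 100.
    return 100.0 * sum(checklist.values()) / len(checklist)

tier2_steps = {
    "recorded a 15-minute teaching segment": True,
    "viewed the video and tallied BSP": True,
    "emailed the totals the same day": False,
    "completed the process three times this week": True,
}
print(treatment_fidelity(tier2_steps))  # 75.0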
Tier 1. Treatment fidelity at Tier 1 included the researcher’s use
of a lesson plan to guide the delivery of instructional content during
faculty training and a checklist outlining the steps of the Tier 1 proce-
dure. The checklist was marked by the researcher prior to and follow-
ing the training and was integral to the Tier 1 data collection process.
As the sole provider of teacher training at Tier 1, the researcher self-
evaluated the fidelity of the treatment at 100%. No reliability measure
was included at this stage of the study.
Tier 2. Treatment fidelity during Tier 2 was ensured through
participants’ use of a checklist to monitor their implementation of in-
tervention procedures: (a) operating a Kodak Flip camera to record a
15-minute or longer teaching segment, (b) viewing the video record-
ing and tallying the BSP, (c) totaling the BSP and emailing the data to
the researcher the same day, and (d) implementing the video process
at least three times per week. The researcher independently recorded
on the checklist her responses to the teacher emails and the written
praise she provided. The primary researcher also made unscheduled
random visits to check the video camera and watched the recorded
contents to ensure recordings were taking place. Although reliabil-
ity data were not formally collected, the researcher’s written records
indicated the average percentage of intervention steps completed by
teachers was 88%.
Tier 3. Treatment fidelity at this level consisted of the researcher
recording personal participant visits on a coaching log. Participants
were asked to continue the video self-monitoring, which included
sending BSP data to the researcher via email. The permanent prod-
uct of the recorded teaching session served as an additional treatment
fidelity check. The average percentage of steps completed across all
three interventions, including participant and researcher responsibili-
ties, was 92%.
Social Validity
Each participant completed a post-intervention questionnaire at
the conclusion of the study to evaluate perceptions about the utility,
effectiveness, and practicality of a tiered framework for professional
development and the use of BSP to manage disruptive students. All
participants received, filled out, and returned the questionnaire elec-
tronically, rating 10 items on a 6-point Likert-type scale. Eight ques-
tions had rating choices from strongly disagree to strongly agree. The
scale for the remaining two questions included almost never, almost
always, and not applicable as the options. Two participants also com-
pleted a section inviting comments. The researcher encouraged them
to be candid in their responses. The results are listed in Table 3.
Results
The overall results indicate limited change from baseline to the
Tier 1 (faculty training) intervention on increasing BSP, with teacher
behavior change increasing at the Tier 2 (video self-monitoring) and
Tier 3 (coaching) intervention levels (see Figure 1). Concurrently, stu-
dent on-task behavior, although highly variable, showed an increas-
ing trend as teachers increased their BSP.
Participants’ Behavior During Study Phases
Anna.
Baseline. Prior to Tier 1 intervention Anna gave no behavior-
specific praise across three observations–a zero trend with low, sta-
ble data. The student participant averaged 82% time on task during
baseline–a high level with moderate variability and a trend for slight
increase.
Tier 1. Following Tier 1 intervention Anna’s average frequency
of BSP was 0.2, and student on-task behavior averaged 64% with a
range of 40% to 78%. With BSP at 0 prior to Tier 1, Anna needed to
increase BSP to 1 or more over three consecutive data points to cal-
culate a 50% increase. The criterion was not met to remain at Tier 1;
Anna moved to Tier 2 intervention.
Tier 2. BSP during Tier 2 averaged 1.1 per 5-min observation and
ranged from 0 to 5. Student on-task behavior averaged 61% with a
range of 38% to 79%. Although Anna’s BSP increased from 0.2, she
had consecutive data points with 0 BSP and thus was moved to Tier
3 intervention.
Table 3
Social Validity Questionnaire Results

Items rated from strongly disagree to strongly agree (responses from the three participants):
• Faculty training adequate to increase BSP: somewhat agree (1), agree (2)
• Student changed behavior as result of increased BSP: agree (2), strongly agree (1)
• BSP is an effective intervention: strongly agree (3)
• Increasing BSP is feasible and will implement: agree (1), strongly agree (2)
• Video self-monitoring is an effective tool for improving classroom management: agree (2), strongly agree (1)
• Using a collaborative coach is an effective tool for teacher improvement: strongly agree (3)
• Professional development is more effective when it addresses individual needs of each teacher: agree (2), strongly agree (1)
• Professional development is more effective when it addresses needs of faculty as whole: disagree (1), somewhat agree (1), agree (1)

Items rated from almost never to almost always:
• How often will you use video self-monitoring: sometimes (3)
• How often will you ask for a collaborative coach to improve your teaching? once in a while (1), sometimes (1), frequently (1)
Figure 1. Effects of Tiered Intervention on BSP Rates and Student Time On-
Task
Tier 3. Anna averaged 2.6 BSP with a range of 0 to 7 across 12
observations during Tier 3 intervention. Student behavior averaged
68% time on-task with a range of 38% to 89%. The corresponding BSP
for the lowest student on-task percentage was 0; on the day of the high-
est student on-task behavior percentage, the frequency of BSP was 3.
Gail.
Baseline. Prior to intervention Gail had no BSP over seven data
collection points. The student’s on-task behavior averaged 44%.
Tier 1. After Tier 1 intervention Gail gave no BSP five out of sev-
en days, averaging 0.3 BSP per 15 minutes for the Tier 1 condition. Due
to consistent data points with no BSP, she was moved to Tier 2. The
student’s average time on task was 41% with a range of 17% to 67%.
Tier 2. During Tier 2 Gail’s average frequency of BSP was 8.6
over 11 observations with a range of 3 to 13 BSP per 5-minute obser-
vation. Student on-task behavior averaged 62% with a range of 14% to
91%. Because Gail consistently maintained a frequency of BSP above
the 50% improvement over Tier 1 rates at a high stable level with a
rapidly increasing trend, she remained at Tier 2 and faded use of the
video camera for the last three observation periods.
Jane.
Baseline. Prior to the faculty training on BSP, Jane’s average fre-
quency of BSP was 0.44 per 5 minutes over nine observations. Average
student time on-task was 36% with a range of 2% to 74%.
Tier 1. After Tier 1 intervention Jane’s BSP rate per 5 minutes
was 1.14 over seven observations with a range of 0 to 3. Student time
on-task averaged 76% with a range of 49% to 82%. Although Jane in-
creased rates of BSP above 50% over baseline data, she was moved to
Tier 2 because her BSP rates remained at 1 over five consecutive data
points.
Tier 2. Jane increased the frequency of BSP to an average of 2.13
during Tier 2 intervention with a range of 0 to 3. Student average time
on-task during Tier 2 was 57% with a range of 32% to 92%. Although
the data showed a higher level of BSP, Jane’s BSP frequency stayed
consistent, with no increase over eight observations; therefore she was
moved to Tier 3 to encourage increased BSP. During treatment fidelity
checks the researcher discovered that Jane was not consistently video-
taping her lessons, but she did so after resolving equipment concerns.
Tier 3. The average frequency of BSP during Tier 3 was 5.2 per
5 minutes with a range of 3 to 9. Student time on-task averaged 62%
with a range of 39% to 87%.
Social Validity
The results of the social validity questionnaire are presented
in the following paragraph as well as Table 3. The three participants
agreed with all of the statements but one. In response to the statement
“professional development is more effective when it addresses needs
of the faculty as a whole,” one disagreed, one somewhat agreed, and
the other agreed. All participants strongly agreed that BSP is an ef-
fective, feasible intervention to increase desired student behavior and
that collaborative coaching is an effective tool for teacher improve-
ment. All three indicated they would use self-monitoring sometimes,
but they varied in their response to how often they would ask for as-
sistance from a collaborative coach. One responded frequently, one
responded sometimes and the third responded once in a while.
Additional comments reiterated acceptance and confirmed ef-
ficacy of increasing BSP to improve student behavior:
• The BSP that I did on my class this year made a huge difference
in the attitudes of my students. It didn’t solve every problem, but
it really had a strong impact on the behavior of the whole class as
well as the targeted student. The downside to this was the timing.
This is something that could have been implemented in the fall and
saved lots of wasted time just with management. I will definitely
incorporate this along with a few other things at the beginning
of the year next year. I think it is really easy to get stuck in the
habit of acknowledging the negative behaviors and overlook the
positive behaviors of the students. I know that I didn’t realize
this until I started focusing on the positive behaviors. . . . Now
that I am comfortable with implementing BSP in my class, it isn’t
difficult or frustrating at all. . . . Overall I learned some great ways
to change the behaviors of the class and make my classroom a
more positive environment.
• To be honest it made me very nervous to have other professionals
observing me, but I learned through the process the value of
praising specific behavior. I learned that it takes practice to see
the behavior and then to give praise for the behavior. I know that
I have improved on “seeing” the desired behavior and giving
praise, and I will continue to improve this teaching technique.
Discussion
The main focus of this study was to examine the effects of tiered
intervention on teachers’ acquisition of a specific skill: Specifically,
we examined teacher response to individualized professional devel-
opment in respect to increasing BSP. Secondly, student response to
teacher praise and teacher perceptions of the research efforts were
evaluated. This study expands the findings of the Myers et al. study,
as well as the research on effective professional development using
video self-monitoring and coaching.
Baseline data on BSP rates revealed that teachers gave little to no
behavior-specific praise statements, especially directed towards stu-
dents they identified as disruptive. These findings are consistent with
research on teacher-student interactions with little positive feedback
or praise for appropriate conduct (Brophy, 1981; Beaman & Wheldall,
2000; Sutherland et al., 2000). The baseline condition was followed by
Tier 1 intervention—the school-wide faculty training on BSP. After
this training, teachers were challenged to increase their frequency of
BSP by 50%. (Those with a rate of 0 were challenged to increase their
rate by 100%). All teachers at the faculty meetings, including the par-
ticipants, committed verbally to do so for the school year.
Results indicated that participants’ BSP did not increase con-
sistently following the faculty training. During the Tier 1 condition
two participants made slight improvements but rapidly returned to
baseline, showing only a weak relation between the independent and
dependent variables. These results agree with research demonstrat-
ing that a one-time delivery of information is largely inadequate to
change teacher behavior (Billingsley, 2005; Garet, Porter et al., 2001;
Garet, Wayne et al., 2010; Guskey & Yoon, 2009). Similarly, these re-
sults extend comparable findings that a one-session faculty training
so often used in school districts does not yield significant change in
teacher behavior (Elmore, 2002; Fixsen et al., 2005; Myers et al., 2011;
Sprick et al., 2006).
In contrast, when visual feedback (via video self-monitoring)
was added during Tier 2, the frequency of BSP increased for all partic-
ipants, especially for Gail, who examined her teaching and increased
her BSP enough to require no additional support. Similarly, Anna re-
ported, “I had no idea I said [a specific word] over and over as I teach.
I need to change that right away.” Jane mentioned that she didn’t real-
ize she was favoring one side of her classroom; thus she made an effort
to turn toward the students on the other side. These results support
the use of video-taping in helping teachers notice classroom interac-
tions as they develop effective teaching skills both as pre-service and
as in-service teachers (Hennessy & Deaney, 2009; Hitchcock, Dowrick
& Prater, 2003; Sherin & van Es, 2005). In addition to self-monitoring,
coaching was folded into the three-tiered, professional development
program to provide individualized support based on teacher need
and personal choice. This decision to use coaching is supported by the
body of educational professional development research which sug-
gests that adult learning is more effective when it is contextual, ongo-
ing, and classroom specific (Ackerman, 2008; Knight, 2009; Shidler,
2009; Onchwari & Keengwe, 2008; Sprick et al., 2006).
As Anna and Jane received visits from the coach, their frequency
of BSP increased. Jane received regular personal visits; however Anna
did not due to her absences during scheduled visits and the work
schedule of the coach. On days of no personal visits the coach con-
tacted Anna by email. The frequency of BSP dropped on days of email
correspondence and increased on days of personal visits, indicating
the need for follow-up and accountability measures for teachers who
do not respond to lower levels of support (Capizzi, Wehby, & Sand-
mel, 2010; Hennessy & Deaney, 2009).
The coaching dynamics of this study highlight the difference be-
tween voluntary and assigned collaboration (Onchwari & Keengwe,
2008; Sprick et al., 2006). Teacher resistance was minimal, yet underly-
ing defensiveness was evident during initial meetings with the par-
ticipants. They may have felt that the principal was questioning their
abilities, and resulting feelings of inadequacy could have impaired
their teachability and their learning.
The ultimate purpose of informing teacher change is to impact
student learning. In this study, student on-task behavior was also re-
corded in an effort to evaluate the possible effects of increasing BSP.
Data for student time on task indicate similar patterns when viewed si-
multaneously with BSP rates: When the teacher praise rate was highly
variable, the student on-task behavior was highly variable. Likewise,
when the teacher praise was consistent and demonstrated a trend to
increase, the student on-task behavior was steady and at a high level.
Similar patterns in teacher-student data points may indicate a correla-
tion between increased BSP and increased student on-task behavior,
which supports findings from Sutherland et al. (2000) indicating that
increased teacher praise results in increased student task engagement.
The study measured social validity to ascertain teacher percep-
tions of a responsive tiered framework of professional development.
Teachers concurred that an individualized approach to professional
development is more effective than a general whole-group approach.
Additionally, all participants intended to continue self-monitoring to
inform their practice. Participants differed in their willingness to ask for
the assistance of a collaborative coach; this finding further validates
the importance of considering individual preferences and needs in
teacher training (Myers et al., 2011).
Limitations and Future Research
The purpose of this study was to examine the relationship
between the frequency of behavior-specific praise and a tiered
intervention approach to professional development. Student on-task
behavior was recorded secondary to this primary construct. Although
student behavior did appear to follow similar patterns of teacher be-
havior, any implied relationship should be viewed with caution. Fu-
ture research should systematically examine the causal relationship
between teacher behavior and student behavior.
The participants were selected from a pool of teachers identified
by their principal as needing assistance with difficult students. The
participants were sometimes hesitant or even resistant, possi-
bly affected by this selection process; however, they were cooperative,
especially after the researcher showed interest in classroom activities
and gave sincere, positive feedback on their interactions with students
and their good teaching practices. Further research should consider
implementation with teachers who may be more resistant to improv-
ing their classroom management skills.
Motivation to participate in interventions is an important part
of coaching literature (Sprick et al., 2006). Nomination by their prin-
cipals may have caused external rather than internal motivation for
these teachers to participate. Sprick and colleagues maintain that if
coaching or collaborative consultation between practitioners is to be
optimally effective, it needs to be voluntary. Future research should
broaden the scope by inviting all teachers in a school to participate in
a tiered professional development approach.
The researcher and observers were not part of the school faculty,
which may have positively or negatively impacted teacher behavior.
Guskey and Yoon (2009) maintained that outside experts can positive-
ly affect teacher improvement only as time is allotted for follow-up,
demonstration and problem-solving activities. As ongoing profes-
sional development from outside sources is not financially feasible,
studies should consider implementing this type of professional devel-
opment using the existing training structures of the school or school
district (e.g., district specialists, mentor teachers, school psycholo-
gists, and school principals).
In single-subject research studies, treatment fidelity is crucial in
establishing functional relationships between the dependent and in-
dependent variables (Horner, Carr, & Halle, 2005). Inconsistencies in
treatment fidelity were encountered during this study as well as in the
Myers et al. (2011) study on which it was based. Frequent monitoring
is key (Ardoin, 2006; Barnett et al., 2004). As the sole monitor of treat-
ment fidelity, the researcher in this study discovered that two of the
three participants were not consistently following listed procedures
for Tier 2 and Tier 3 interventions. Simply asking the participants to
sign the coaching log during each visit or to sign the treatment fidelity
checklist to verify observance of the steps may increase fidelity of
treatment. Future research should plan for an objective treatment fi-
delity measure, including inter-observer reliability, for use during the
performance feedback intervention (Tier 2 in this study).
As with most single-subject research (Horner et al., 2005; Taw-
ney & Gast, 1984), this study was necessarily conducted on a small
scale limited to elementary-level teachers. A limited sample size,
along with a participant pool including only white, female, middle-
aged teachers, affected the generalizability of the results. Replication
of this study across grade levels and participant characteristics (e.g.,
gender, ethnicity, years of experience) may increase the external va-
lidity of the findings (Myers et al., 2011).
Another limitation of this study was lack of a maintenance
phase. While the frequency of BSP did show an increasing trend and
high stability for one participant, a maintenance phase of the study
was not possible because the school year was almost over. Data from
two of the three participants revealed that follow-up visits increased
BSP, and without visits or contact to monitor teacher behavior, BSP
returned to lower frequencies. Although Myers and colleagues (2011)
included a maintenance phase during their similar study, they also
found that without follow-up or monitoring, the frequency of BSP
decreased. Future research should include a fade and maintenance
phase to ensure skill acquisition (Myers et al., 2011).
Implications for Practice
Guskey and Yoon (2009) affirmed the importance of translating
professional development into improved student outcomes and the
necessity for its thoughtful planning. Methods taught should be in-
dividually responsive and continuous. Using a tiered continuum of
ongoing teacher support accompanied by increased feedback has the
potential to be individually responsive in its support of teacher skill
acquisition, with critical follow-up embedded within the model struc-
ture (Guskey & Yoon, 2009).
This study confirms results from similar studies indicating that
video self-monitoring, the second tier of the model, provides an ac-
curate permanent product with a data set that meaningfully informs
instruction, especially when accompanied by consultation from a
mentor (Capizzi et al., 2010; Myers et al., 2011; Sherin & van Es, 2005).
Performance feedback from video analysis allows teachers to think
critically about their instruction and its effects on student achieve-
ment, making it a viable strategy to improve instructional practice
(Capizzi et al., 2010; Colvin et al., 2009; Hitchcock, Dowrick & Prater,
2003). Large-scale implementation may include access to equipment
for every classroom in a school. Group analysis of the recorded teach-
ing segments, similar to a study by Sherin and van Es (2005), could be
considered as an activity for professional learning communities.
One final implication from this study is that although research
findings indicate that increasing praise results in increasing students’
time on task and decreasing their disruptive behavior (Cherne, 2009;
Sutherland et al., 2000), consensus on a prescribed number of praises
per minute has not been reached by researchers and practitioners.
Both Sutherland et al. (2000) and Myers et al. (2011) used six praise
statements per 15-minute teaching segment as a standard for effective
practice. This study considered the individual
performance of the teacher and examined whether an incremental in-
crease (50% or greater) from pre-intervention BSP rates and subsequent aver-
ages of each tier would affect student behavior. Preliminary outcomes
demonstrated a relationship between teacher praise and student be-
havior; however, a specific prescribed praise rate cannot be
designated from the outcomes of this study. One implication is
that praise is a low-cost, efficient, and effective strategy for teachers to
promote positive student behavior. It requires minimal professional
development effort and resources. Researchers should further exam-
ine the difference between using a predetermined number of praises
per minute and requesting a percentage increase in determining at
what point the student behavior is affected.
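To make the contrast concrete, the two approaches can be written out as follows; the figures are illustrative only, using the six-praises-per-15-minutes standard cited above as the fixed benchmark and a hypothetical baseline for the incremental one:

$$\text{fixed standard: } \frac{6\ \text{praises}}{15\ \text{min}} = 0.4\ \text{praises per minute}$$

$$\text{incremental standard: } r_{\text{target}} \geq 1.5 \times r_{\text{baseline}}, \quad \text{e.g., } 1.5 \times 0.4 = 0.6\ \text{praises per minute}$$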
Conclusion
This study demonstrated that three elementary general educa-
tors increased their frequency of behavior-specific praise when pro-
vided with a continuum of performance feedback support that in-
creased in intensity based on need. Thus a functional relationship was
established between the independent and dependent variables. The
increasing need for effective teachers intensifies the need for effective
professional development systems. Methods of teacher training that
provide a continuum of ongoing support, embedded evaluation, and
follow-up are most effective. Researchers should continue to exam-
ine the effectiveness of providing a continuum of interventions to im-
prove teacher skills in order to achieve the ultimate goal of improving
student outcomes in academic, social, and behavioral areas.
References
Ackerman, D. J. (2008). Coaching as part of a pilot quality rating scale
initiative: Challenges to—and supports for—the change-
making process. Early Childhood Research & Practice, 10(2). Re-
trieved from http://ecrp.uiuc.edu/.
Ardoin, S. P. (2006). The response in response to intervention: Evalu-
ating the utility of assessing maintenance of intervention ef-
fects. Psychology in the Schools, 43(6), 713–725.
Barnett, D. W., Daly, E. J., III, Jones, K. M., & Lentz, F. E., Jr. (2004).
Response to intervention: Empirically based special service
decisions from single-case designs of increasing and decreas-
ing intensity. Journal of Special Education, 38(2), 66–79.
Beaman, R., & Wheldall, K. (2000). Teachers’ use of approval and
disapproval in the classroom. Educational Psychology, 20(4),
431–446.
Billingsley, B. S. (2005). Cultivating and keeping committed special educa-
tion teachers: What principals and district leaders can do. Thou-
sand Oaks, CA: Corwin Press.
Brophy, J. E. (1981). Teacher praise: A functional analysis. Review of
Educational Research, 51, 5–32.
Burnett, P. C. (2002). Teacher praise and feedback and students’ per-
ceptions of the classroom environment. Educational Psychol-
ogy, 22, 5–15.
Capizzi, A. M., Wehby, J. H., & Sandmel, K. N. (2010). Enhancing
mentoring of teacher candidates through consultative feed-
back and self-evaluation of instructional delivery. Teacher
Education and Special Education, 33(3), 191–212.
Cherne, J. (2009). Effects of praise on student behavior in the classroom
(Doctoral dissertation). Retrieved October 17, 2010, from Dis-
sertations & Theses: Full Text. (Publication No. AAT 3328300)
Colvin, G., Flannery, K. B., Sugai, G., & Monegan, J. (2009). Using ob-
servational data to provide performance feedback to teachers:
A high school case study. Preventing School Failure, 53, 95–104.
Coyne, M. D., Kame’enui, E. J., & Carnine, D. W. (2007). Effective teach-
ing strategies that accommodate diverse learners. Upper Saddle
River, NJ: Pearson Education.
Elmore, R. (2002). Bridging the gap between standards and achievement.
Washington, DC: The Albert Shanker Institute.
Ferguson, E., & Houghton, S. (1992). The effects of contingent teach-
er praise, as specified by Canter’s assertive discipline pro-
gramme, on children’s on-task behaviour. Educational Studies,
18, 83–93.
Fixsen, D. L., Naoom, S. F., Blasé, K. A., Friedman, R. M., & Wallace,
F. (2005). Implementation research: A synthesis of the literature.
Tampa, FL: University of South Florida, Louis De la Parte
Florida Mental Health Institute, The National Implementa-
tion Research Network. (FMHI Publication #231)
Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to interven-
tion: What, why, and how valid is it? Reading Research Quar-
terly, 41, 93–99.
Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. S.
(2001). What makes professional development effective? Re-
sults from a national sample of teachers. American Educational
Research Journal, 38(4), 915–945.
Garet, M., Wayne, A., Stancavage, F., Taylor, J., Walters, K., Song,
M., . . .& Doolittle, F. (2010). Middle school mathematics profes-
sional development impact study: Findings after the first year of
implementation (NCEE 2010-4009). Washington, DC: National
Center for Education Evaluation and Regional Assistance, In-
stitute of Education Sciences, U.S. Department of Education.
Gresham, F. M. (2005). Response to intervention: An alternative means
of identifying students as emotionally disturbed. Education
and Treatment of Children, 28, 328–344.
Guskey, T. R., & Yoon, K. S. (2009). What works in professional devel-
opment? Phi Delta Kappan, 90(7), 495–500.
Hennessy, S., & Deaney, R. (2009). The impact of collaborative vid-
eo analysis by practitioners and researchers upon peda-
gogical thinking and practice: A follow-up study. Teach-
ers and Teaching: Theory and Practice, 15(5), 617–638. doi:
10.1080/1350600903139621
Hitchcock, C. H., Dowrick, P. W., & Prater, M. A. (2003). Video self-
modeling intervention in school-based setting: A review. Re-
medial and Special Education, 24(1), 36–45.
Horner, R. H., Carr, E. G., & Halle, J. (2005). The use of single-subject
research to identify evidence-based practice in special educa-
tion. Exceptional Children, 71(2), 165–179.
Joyce, B., & Showers, B. (1995). Student achievement through staff devel-
opment: Fundamentals of school renewal. New York, NY: Long-
man.
Kalis, T. M., Vannest, K. J., & Parker, R. (2007). Praise counts: Using
self-monitoring to increase effective teaching practices. Pre-
venting School Failure, 51(3), 20–27.
Knight, J. (2009). Coaching: The key to translating research into prac-
tice lies in continuous, job-embedded learning with ongoing
support. Journal of Staff Development, 30(1), 18–20.
MotivAider [Apparatus]. (2010). Thief River Falls, MN: Behavioral
Dynamics, Inc. Information available at http://habitchange.
com/motivaider.php
Myers, D. M., Simonsen, B., & Sugai, G. (2011). Increasing teachers’
use of praise with a response-to-intervention approach. Edu-
cation and Treatment of Children, 34(1), 36–59.
No Child Left Behind Act of 2001, 20 U.S.C. § 6319 (2008).
Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Gatti, S. L., Wil-
liams, K. L., & Resetar, J. L. (2005). Treatment implementation
following behavior consultation in schools: A comparison
study of three follow-up strategies. School Psychology Review,
34(1), 87–106.
Onchwari, G., & Keengwe, J. (2008). The impact of a mentor-coaching
model on teacher professional development. Early Childhood
Education Journal, 36, 19–24. doi:10.1007/s10643-007-0233-0
Peterson, D. S., Taylor, B. M., Burnham, B., & Schock, R. (2009). Re-
flective coaching conversations: A missing piece. The Reading
Teacher, 62(6), 500–509.
Sherin, M. G., & van Es, E. (2005). Using video to support teachers’
ability to notice classroom interactions. Journal of Technology
and Teacher Education, 13(3), 475–491.
Shidler, L. (2009). The impact of time spent coaching for teacher ef-
ficacy on student achievement. Early Childhood Education Jour-
nal, 36(5), 453–460.
Simonsen, B., Fairbanks, S., Briesch, A., Myers, D., & Sugai, G. (2008).
Evidence-based practices in classroom management: Consid-
erations for research to practice. Education and Treatment of
Children, 31(3), 351–380.
Sprick, R., Knight, J., Reinke, W., & McKale, T. (2006). Coaching class-
room management: Strategies and tools for administrators and
coaches. Eugene, OR: Pacific Northwest Publishing.
Stichter, J. P., Lewis, T. J., Richter, M., Johnson, N. W., & Bradley, L.
(2006). Assessing antecedent variables: The effects of instruc-
tional variables on student outcomes through in-service and
peer coaching professional development models. Education
and Treatment of Children, 29(4), 665–692.
Sugai, G. (2007). Promoting behavioral competence in schools: A com-
mentary on exemplary practices. Psychology in the Schools,
44(1), 113-118. doi: 10.1002/pits.20210
Sutherland, K. S., Wehby, J. H., & Copeland, S. R. (2000). Effect of
varying rates of behavior-specific praise on the on-task be-
havior of students with emotional and behavioral disorders.
Journal of Emotional and Behavioral Disorders, 8, 2–8.
Tawney, J. W., & Gast, D. L. (1984). Single subject research in special
education. Columbus, OH: Charles E. Merrill Publishing Com-
pany.
Yoon, K. S., Duncan, T., Lee, S. W., Scarloss, B., & Shapley, K. (2007).
Reviewing the evidence on how teacher professional development af-
fects student achievement (Issues & Answers Report, REL 2007–
No. 033). Washington, DC: U.S. Department of Education,
Institute of Education Sciences, National Center for Education
Evaluation and Regional Assistance, Regional Educational
Laboratory Southwest. Retrieved from http://ies.ed.gov/ncee/
edlab.
A Comparison of Three Types
of Opportunities to Respond
on Student Academic and Social Behaviors
Todd Haydon
University of Cincinnati, Ohio
Maureen A. Conroy
Virginia Commonwealth University, Richmond
Terrance M. Scott
University of Louisville, Kentucky
Paul T. Sindelar
Brian R. Barber
Ann-Marie Orlando
University of Florida, Gainesville
An alternating treatments design was used to investigate the effects of three types of opportunities to respond (i.e., indi-
vidual, choral, and mixed responding) on sight words and syllable practice in six elementary students with behavioral
problems. During the mixed responding condition, five out of six students demonstrated a lower rate of disruptive behavior,
and four out of six students had fewer intervals of off-task behavior. Results of the three types of opportunities to respond
on participants’ active student responding were less clear. A discussion of limitations, implications, and future research
directions is included.
Keywords: individual responding; choral responding; active student responding; disruptive behavior
Journal of Emotional and Behavioral Disorders, 18(1), 27–40.
© 2010 Hammill Institute on Disabilities.
doi: 10.1177/1063426609333448
http://jebd.sagepub.com hosted at http://online.sagepub.com

Authors’ Note: This research and preparation of this article have
been supported in part by an OSEP doctoral leadership grant. Please
address correspondence to Todd Haydon, ML 0022, 600F Teachers/
Dyer Hall, CECH, University of Cincinnati, Cincinnati, OH 45221-0022;
e-mail: todd.haydon@uc.edu.

Teachers in general education classrooms typically use the
lecture format during large group instruction and expect that
their students passively watch and listen while course content
is presented. The common questioning procedure used with this
style of instruction is asking individual students to volunteer
by raising their hands (Armendariz & Umbreit, 1999). However, a
limitation of this instructional method is that only a handful
of students, usually higher achievers, actively respond to
teachers’ questions (Greenwood, 2001; Greenwood, Delquadri, &
Hall, 1984). In the past, Good (1970) found that students, in
particular students who are low achievers, were not provided
equal opportunities to respond and frequently passively watched
and listened as their higher achieving peers answered questions.
As a result, low achieving students may often fail to receive the
practice and feedback that is necessary for achievement gains.
To increase teacher rates of opportunities to respond,
researchers have theorized and conceptualized instruction as
having a basic unit of instruction called a learning trial. A
learning trial consists of a three-term, stimulus-response-
consequent contingency sequence (Skinner, Fletcher, &
Hennington, 1996). Researchers have shown that improving the
quality and increasing the quantity of learning trials results
in higher learning rates (Barbetta & Heward, 1993; Carnine,
1976; Miller, Hall, & Heward, 1995). An example of a learning
trial is when a teacher presents a science word on a flash card
(i.e., stimulus), the student recites the word aloud (i.e.,
response), and the teacher then says, “Good answer” (i.e.,
consequent) (Skinner, Belfiore, Mace, William-Wilson, & Johns,
1997). Researchers have shown that increasing the number of
learning trials could increase learning levels during the
acquisition, fluency building, and maintenance stages of
learning (Skinner, Smith, & McLean, 1994).
Using choral responding is one instructional strategy that
increases both learning trial rates and learning rates during
teacher-led instruction (Skinner et al., 1996). Choral
responding occurs when all students are asked to respond
following the presentation of an instructional stimulus
(Heward, 1994). The purpose of using choral responding
is to increase the number of active student responses and,
as a result, increase the number of correct responses and the
amount of time students are engaged during instruction
while allowing the teacher to monitor each student’s under-
standing of each question (Carnine, 1976; McKenzie &
Henry, 1979; Miller et al., 1995; Sainato, Strain, & Lyon,
1987; Sutherland, Alder, & Gunter, 2003).
Providing students frequent opportunities to respond is
important because researchers suggest that increased stu-
dent responding is linked to on-task behavior and engage-
ment during instruction (Carnine, 1976; Sainato et al.,
1987; Sutherland et al., 2003). When students are engaged
and actively responding to questions, teachers can focus on
academic content rather than being concerned with inap-
propriate student behaviors. Increasing the focus on
academic content is particularly important for teachers
who instruct students with or at risk for emotional or
behavioral disorders (EBD), because students with or at
risk for EBD are more likely to engage in inappropriate
behaviors than their typically achieving peers (Hastings
& Oakford, 2003; Nelson & Roberts, 2002).
In a study designed to verify various effective instruc-
tional techniques, Anderson, Evertson, and Brophy (1979)
found that during small group first-grade reading groups,
choral responding was negatively related to achievement
and individual responding (ordered turns) was positively
related to achievement. However, McKenzie and Henry
(1979) compared an individually addressed question
condition with a unison hand-raising condition in two
third-grade classrooms and found that students in the
unison hand-raising condition had significantly fewer
intervals of off-task behavior than students in the indi-
vidual hand-raising condition. Results from several more
recent studies indicate that increased rates of opportuni-
ties to respond by using choral responding produced a
higher percentage of intervals of on-task behavior.
Sutherland et al. (2003) demonstrated that when a teacher
increased his rate of opportunities to respond and used
choral responding, nine students identified as EBD in a
self-contained classroom had more correct responses,
fewer disruptions, and increased on-task behavior during
math lessons. Haydon, Mancil, and Van Loan (in press)
systematically replicated the Sutherland et al. (2003)
study. Similar to the Sutherland et al. study, when the
teacher increased his rate of opportunities to respond and
used choral responding in a general education classroom
setting, a fifth-grade student identified as at risk for EBD
had higher percentages of on-task behavior and correct
responses and lower rates of disruptive behavior than
during the individual responding condition.
Two more studies lend support for the advantage of
using choral responding over individual responding.
Sainato et al. (1987) compared choral responding with a
baseline individual responding condition with three pre-
school children identified as having significant behavioral
and developmental delays during morning circle time.
Sainato and colleagues compared the use of two rates
(three/min; five/min) of choral responding with a baseline
individual responding condition and results indicated that
on-task behavior and correct responding improved during
the higher rate of choral responding. In a similar investiga-
tion, Sindelar, Bursuck, and Halle (1986) compared two
modes of responding: ordered and choral. Findings sug-
gested a slight but significant difference between sight
words mastered across all three groups of students during
the choral responding condition in comparison with the
ordered response condition. On a post-instruction test, the
students in the choral responding condition had a higher
percentage of words read correctly than the students in the
ordered responding condition. There was not a substantial
difference in the percentage of on-task behavior between
conditions (83% for the choral responding condition and
79% for the ordered responding condition).
However, Wolery, Ault, Doyle, Gast, and Griffin (1992)
compared choral versus individual responding in small
group arrangements and had contrasting findings to earlier
findings. Results on the effectiveness of the two types of
responding differed depending on the amount of opportu-
nities to respond provided and student exposure to ques-
tions during each condition. The authors concluded that
the two types of responding produced relatively equal
learning, and only a slight difference in effectiveness and
efficiency (favoring choral over individual responding) was found.
However, these results could be due to a small group set-
ting and may not replicate in a large group classroom set-
ting where students may typically receive fewer individual
opportunities to respond and be required to passively lis-
ten for longer periods of time.
Whereas researchers have compared choral and indi-
vidual responding, Stevens and Rosenshine (1981) sug-
gested that using mixed responding (a ratio of 70:30
choral to individual responding) might be a more effec-
tive and efficient instructional strategy. They hypothe-
sized that students could benefit from frequent practice
of choral responding, whereas teachers could test specific
children and gain information on individual performance
by using individual responding. In classroom-based
research, it is important to determine the most efficient
method of instruction to increase the likelihood that
teachers will use that strategy in the future. Comparing
types of instructional strategies is one way to determine
which instructional strategies produce the best results
(Skinner, Johnson, Larkin, Lessley, & Glowacki, 1995).
This study addresses this issue and extends the learning
trial literature in several ways. First, the effectiveness of
decreasing students’ disruptive and off-task behavior as well
as increasing students’ active student responding was exam-
ined by comparing three types of opportunities to respond
(individual, choral, and a mixture of 70% choral responding
and 30% individual responding) in a second-grade general
education classroom setting. Second, the three types of
opportunities to respond represented the use of an antecedent
instructional strategy in the beginning of a learning trial as
opposed to an error correction strategy at the end of a learning
trial. Third, the three types of opportunities to respond were
used with students identified as at risk for EBD.
The purpose of this study was to investigate the follow-
ing research question: What effects do choral responding,
individual responding, and a mixture of choral and indi-
vidual responding procedures have on the disruptive, off-
task behavior, and active student responding of students
identified as high risk for EBD during group instruction in
a general education classroom?
Method
Participants
To recruit participants, the first author contacted assis-
tant principals at elementary schools in a southern school
district to determine if the school was interested in partici-
pating in the study and if they had any second-grade teach-
ers who would be interested in being participants. Two
schools in the school district and three second-grade
teachers from each school volunteered to participate.
Teachers. Six teachers were recruited and served as
participants in this study. Teacher participants (a) had a
minimum of 2 years of teaching experience, (b) used less
than two opportunities to respond per minute during a
pre-assessment condition, and (c) consented to partici-
pate in the study. All six teachers were Caucasian, and
five of the six teachers were female. The average years
of teaching experience was 3.0 (range = 2–6 years), and
all six teachers had taken a behavior management class
as undergraduate students.
Students. Six students identified as having chronic
disruptive behaviors that placed them at risk for EBD
participated in this study. Table 1 reports information
on each participant’s gender, ethnicity, age, and risk
score. To determine if the student participants were at
risk for EBD, the Systematic Screening for Behavior
Disorders (SSBD; Walker & Severson, 1993) was con-
ducted. The SSBD is an empirically validated, multiple
gating procedure used to identify students with pat-
terns of internalizing and externalizing behaviors (Walker
& Severson, 1993). Identification occurs in three stages:
(a) teacher nomination of students with externalizing
or internalizing behavior problems, (b) ranking and
evaluation of the top three students in either category,
and (c) observation of these students by another profes-
sional in the classroom and playground setting.
Stage 1 of the SSBD was one of the regular practices
used by the school to identify students at elevated risk
for EBD. Thus, a separate consent was not needed for
ranking students in the Stage 1 process. Following Stage 1,
the first author obtained consent from the parents of the
students who were ranked highest in the Stage 1 process
in each class. Informed consent granted permission to
implement the second and third stages of the SSBD
(Walker & Severson, 1993) and participate in the study
if scores indicated that the student was at high risk. Once
informed consent was obtained, Stage 2 of the SSBD
was conducted. All participants targeted for Stage 2 met
criteria for inclusion in the study.
The following eligibility criteria were used to identify
participants: They (a) were rated by the teachers as hav-
ing high rates of disruptive behavior for more than 1
month according to the critical events index and com-
bined frequency index on the SSBD, (b) were enrolled in
a second-grade general education class, (c) were between
the ages of 7 and 8, and (d) had parental consent to par-
ticipate in the study.
Setting and Materials
Setting. The setting for the study was six second-grade
general education classrooms in the south. Two schools,
Table 1
Participant Characteristics

Name     Gender   Ethnicity          Age                SSBD Score
Frank    male     African American   7 years 6 months   25/35
D’Andy   male     African American   8 years 2 months   31/35
Monty    male     African American   7 years 5 months   29/35
Teo      male     African American   8 years 2 months   32/35
Amber    female   African American   8 years 2 months   30/35
Mats     male     Caucasian          7 years 6 months   27/35

Note: SSBD = Systematic Screening for Behavior Disorders.
one urban and the other suburban, were selected. Class
size ranged from 18 to 22 students. Participants and
Teachers 1 through 3 attended the urban school, whereas
Participants and Teachers 4 through 6 attended the sub-
urban school. The racial/ethnic make-up of the classrooms
in the urban school was approximately 70% African American
and 30% Caucasian, whereas in the suburban school, the
percentage was roughly 50% African American and 50%
Caucasian. This study took place during a large group
instruction, teacher-directed academic activity that had
the potential for high rates of opportunities to respond
(Skinner et al., 1996). All instructional activities took
place in the morning.
Materials. During the targeted activity, materials that
are commonly used for language arts instruction were
used in the study (5 in. × 7 in. flash cards). The primary
experimenter developed, along with teachers, consistent
lesson plans and instructional materials to teach content
vocabulary and syllable practice. All six teachers used
sight words that were at an equivalent level of difficulty
and indicated that the content of the cards covered the
same stories and review of previous spelling tests. The
set of flash cards was the same for Teachers 1 and 3 and
Teachers 4, 5, and 6. Teacher 2 opted to use her own
sight word cards.
Teacher Training
Teacher training consisted of two phases: (a) informa-
tion sharing and (b) practice until mastery occurred.
Training was implemented during two 30-minute prac-
tice sessions on two separate days based on procedures
employed by Sutherland et al. (2003). The first author
trained Teachers 1 through 3 and Teachers 4 through 6 on
separate occasions.
Phase 1: Information sharing. The first author pre-
sented a review of the operational definition of opportu-
nities to respond and showed a 5-minute viewing of
video clips of teachers using individual and choral
responding. He then explained the expectations, proce-
dures, and rules for the three responding conditions.
Phase 2: Practice until mastery. Practice of the three
types of opportunities to respond consisted of (a) show-
ing a sight word card to the class; (b) cuing the students
verbally (“5-4-3-2-1”) to allow adequate wait time and then
providing a verbal prompt to respond; (c) providing feedback on
whether the answer was correct or incorrect (e.g., “That
is correct,” or “That is not correct. The correct answer is
______.”); and (d) selecting another sight word card and
beginning the next learning trial (Heward, Courson, &
Narayan, 1989).
Comparison of the Three Interventions
Based on a randomized schedule, the teacher was
instructed to implement either (a) individual responding,
(b) choral responding, or (c) mixed mode responding—
all at a rate of approximately five per minute. This rate
was selected based on findings from Sainato et al. (1987)
that suggested only slight differences between rates of
three and five opportunities to respond per minute. The
faster rate of five opportunities to respond per minute
was selected because this rate produces a pacing that is
appropriate for review of sight words and syllable prac-
tice (Heward, 1994).
During each of the three conditions, teachers imple-
mented the four-step procedure for each learning trial as
indicated in Phase 2 of the teacher training phase: The
four-step procedure was identical for the three conditions
(except for the second step: cueing procedures differed
slightly for each condition; see below).
Individual responding. In this condition, the teacher
followed the four-step procedure; reviewed the proce-
dures, expectations, and rules for individual responding;
and called on each student randomly to pronounce the
word on the sight word card or indicate how many syl-
lables were in the word. During this condition, the total
exposure to questions (i.e., the number of times a student
saw a sight word card presented either to oneself or to a
peer) was approximately 40 over the entire session;
however, the teacher ensured that the number of indi-
vidual opportunities to respond (the number of times the
teacher asked the targeted student to respond per word)
was three.
Choral responding. In this condition, the teacher fol-
lowed the four-step procedure; explained the expecta-
tions, procedures, and rules for the choral responding
condition (specifically cueing procedures); cued all the
students by saying “group”; showed a sight word card to
the class; and cued the entire class to respond. During this
condition, the total exposure to questions and opportuni-
ties to respond was equal and was approximately 40 ques-
tions over the entire session for the targeted student.
Mixed responding. In this condition, the teacher
explained the expectations, procedures, and rules for
the mixed responding condition (specifically cueing
procedures) and read from a list developed by the
researcher indicating the type of opportunity to respond,
either a choral or an individual opportunity to respond.
For each individual response, the teacher said, “This is
individual,” showed a sight word card, read the defini-
tion, counted down from five, called on a student, and
asked, “What word?” For each choral response, the
teacher said, “This is for everyone,” showed a sight word
card, read the definition, counted down from five, and
asked the entire class, “What word?” Using a ratio of
70% choral to 30% individual at a rate of approximately
five opportunities to respond per minute yielded 28 cho-
ral responses to 12 individual responses for the 8-minute
session. During choral responding, all students includ-
ing the targeted student responded 28 times. During
individual responding, the targeted student responded 3
times and 9 peers were randomly called to respond
once. During this condition, the total exposure to ques-
tions for the targeted student was approximately 40 over
the entire session, whereas the number of opportunities
to respond was 31.
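The session arithmetic behind these counts, drawn directly from the figures above, is:

$$5\ \text{OTR/min} \times 8\ \text{min} = 40\ \text{learning trials}; \quad 0.70 \times 40 = 28\ \text{choral}; \quad 0.30 \times 40 = 12\ \text{individual}$$

$$\text{targeted student's opportunities} = 28\ \text{choral} + 3\ \text{individual} = 31; \quad \text{exposure to questions} = 28 + 12 = 40$$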
Dependent Measures
The dependent measures for this study included
(a) disruption, (b) off-task behavior, and (c) active stu-
dent responding. Disruptive behavior was defined as any
behavior demonstrated by the target student that inter-
rupted the flow of instruction or was disruptive to the
on-task behavior of other students. The following behav-
iors are examples of disruptive behaviors: getting up
from seat, touching others, talking to another student,
speaking out loud without raising hand, taking things
from others, throwing objects, making noise (tapping,
banging), moving head up and down or from side to side,
talking to others, rocking in chair, and so forth (Armendariz
& Umbreit, 1999).
Because the activity required student eye contact
with the flash card, off-task behavior was defined as
occurring when the target student was not actively
directed (looking) toward the teacher (e.g., looking
around the room, looking at or drawing on the desk,
playing with materials in the desk, hair, or clothes, etc.;
Miller et al., 1995).
Active student response was defined as engaging in the
behavior that was expected during the specific opportuni-
ties to respond condition and included (a) independent
hand raising for the individual responding, (b) responding
in unison with the group for choral responding, or (c) both,
in the mixed responding condition (Godfrey, Grisham-
Brown, Schuster, & Hemmeter, 2003).
Measurement
All observations lasted a total of 8 minutes; the first 4
minutes consisted of review of sight words followed by
4 minutes of syllable practice using the same sight
words. During this time, the primary researcher served as
the primary observer and collected real-time data using
direct sequential recording of the teachers’ use of oppor-
tunities to respond followed by student active responding
during the activity period. Student disruptive and off-task
behaviors were also recorded during the activity period
using direct recording. Data were collected using a
paper-and-pencil data collection system.
Prior to data collection, the first author spent time in
each classroom for 30 minutes on five separate occasions
to familiarize the students with his presence and decrease
potential reactivity. To accurately capture the occurrence
of both discrete and continuous behaviors, different types
of measurement strategies were used. Student disruption
was measured using a frequency count and translated into
rate per minute using the following formula: frequency of
disruption/total number of minutes (i.e., 8 min). Active
student responses were measured using a percentage for-
mula derived from counting the number of active student
responses following a teacher’s use of a specific opportu-
nity to respond strategy (i.e., individual, choral, or mixed
responding) and dividing each of those numbers by the
total number of questions the student was exposed to.
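Expressed as formulas (the example figures here are hypothetical, not data from the study):

$$\text{disruption rate} = \frac{\text{frequency of disruption}}{8\ \text{min}}, \quad \text{e.g., } \frac{12}{8} = 1.5\ \text{per minute}$$

$$\text{active responding (\%)} = \frac{\text{active student responses}}{\text{questions exposed to}} \times 100, \quad \text{e.g., } \frac{34}{40} \times 100 = 85\%$$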
Student off-task behavior was measured using momen-
tary time sampling. During the 8-minute observation period,
the primary observer continuously observed the teacher and
target student. The observer was cued every 20 seconds (by
a taped tone) to look at the targeted student and code if the
student was off-task at that moment (Gunter et al., 2003).
Because the length of each session was 8 minutes, there was
a total of 24 observations for off-task behavior.
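The sampling arithmetic works out as follows (the count of six off-task samples is hypothetical):

$$\frac{8\ \text{min} \times 60\ \text{s/min}}{20\ \text{s per sample}} = 24\ \text{samples}; \quad \text{off-task (\%)} = \frac{\text{samples coded off-task}}{24} \times 100, \ \text{e.g., } \frac{6}{24} \times 100 = 25\%$$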
Interobserver Agreement
To provide evidence that the measures of the dependent
variables were accurate, secondary observer(s) collected
interobserver agreement (IOA) data within each condition
of the study (Kennedy, 2005). Data collectors were aware
of which condition was being observed (i.e., individual,
choral, or mixed), but they were unaware of the relative
effectiveness of any condition on the dependent variables.
IOA checks for the dependent variable of disruption
were scored using an exact event occurrence formula
(i.e., an agreement was scored when two observers
scored the same number of events of disruption during
each interval of observation) and calculated by using an
interval agreement formula, dividing the total number of
agreements by the total number of agreements and dis-
agreements and multiplying by 100%. Off-task behavior
was calculated using an interval agreement formula.
Active student responding interobserver agreement was
calculated by using a total agreement method. Both
observers maintained a frequency count of active student
responding and agreement was computed by dividing the
smaller total of occurrence of responding by the larger
total occurrence of responding and multiplying by 100%.
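For illustration, the three agreement computations described above can be sketched as follows; this is a minimal sketch, and the function names and sample data are hypothetical rather than taken from the study.

# Minimal sketch of the interobserver agreement (IOA) formulas
# described above; sample data are illustrative only.

def interval_agreement(obs1, obs2):
    # An agreement is scored when both observers recorded the same
    # value for an interval (exact count for disruption; the same
    # code for each momentary time sample of off-task behavior).
    agreements = sum(1 for a, b in zip(obs1, obs2) if a == b)
    return agreements / len(obs1) * 100

def total_agreement(total1, total2):
    # Smaller session total divided by the larger, times 100 (used
    # for the frequency count of active student responding).
    return min(total1, total2) / max(total1, total2) * 100

# Hypothetical per-interval disruption counts from two observers:
disruption_1 = [0, 1, 0, 2, 0, 0, 1, 0]
disruption_2 = [0, 1, 0, 1, 0, 0, 1, 0]
# Hypothetical off-task codes for the 24 momentary time samples:
off_task_1 = [0, 0, 1, 0] * 6
off_task_2 = [0, 0, 1, 1] * 6
# Hypothetical session totals of active student responses:
asr_1, asr_2 = 34, 36

print(interval_agreement(disruption_1, disruption_2))  # 87.5
print(interval_agreement(off_task_1, off_task_2))      # 75.0
print(round(total_agreement(asr_1, asr_2), 2))         # 94.44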
Prior to beginning data collection and IOA data, the
primary and secondary observer(s) were trained to a reli-
ability of at least 85% for three consecutive sessions on
each dependent measure. To control for observer drift,
the primary observer met with the secondary observer(s)
on a weekly basis and/or repeated the training exercises
once every five sessions (Cooper, Heron, & Heward,
1987). Interobserver agreement was calculated during
33.8% of observations. Average interobserver agreement
for disruption was 93.02% (range = 75–100%), off-task
91.5% (range = 80.0–100%), and active student respond-
ing 98.63% (range = 90.47–100%).
Treatment Integrity
Direct measurement of the independent variable (i.e.,
teacher’s implementation of the opportunities to respond
procedure, i.e., individual, choral, or mixed mode at a rate
of five/min) was conducted as a measure of treatment
integrity on approximately 15% of the sessions by two
secondary observers. Having two observers allowed the
researchers to calculate IOA on integrity. Although this is
not typically done (Yarbrough, Skinner, Lee, & Lemmons,
2004), clear support for the claim that the treatment was
implemented as intended can be made when there is
agreement between two independent observers (Noell &
Witt, 1998). Interobserver agreement for treatment integ-
rity was 100% for the rate of opportunities to respond
(between 4.5 and 5.0 per min), the start of syllable prac-
tice, sequence of steps, and steps in the sequence (cue,
wait time, questions, and feedback given).
A checklist sheet was used to record the occurrence or
nonoccurrence of each step of the opportunities to respond
instructional sequence in the individual, choral, and mixed
conditions as described above. The accuracy of the teach-
ers’ implementation of the individual, choral, and mixed
procedures—the four components: cueing students, allow-
ing adequate wait time (counting down by 5), asking ques-
tions, and providing feedback on student responses, as well
as the number of opportunities to respond per 8-minute
session—was calculated using the total agreement approach.
In addition, the accuracy of the teachers’ start of the imple-
mentation of syllable practice after 4 minutes (within 10 s)
was also calculated. During mixed responding, two observ-
ers followed the teachers’ verbal prompt (i.e., “This is
individual.” “This is group.”) and recorded on the treat-
ment integrity checklist the accuracy with which the
teacher implemented the 70:30 ratio. They also recorded
the number of questions asked to the targeted student.
Social Validity
After the completion of the study, the teachers were
asked to complete three social validity surveys to obtain
information about their perception of the acceptability
and usefulness of each type of opportunities to respond.
Teachers rated nine questions using a 4-point Likert-
type scale, where 1 represents not at all and 4 represents
very much. The rating scale consisted of three catego-
ries: (a) teacher’s perceived ease of implementing each
type of intervention, (b) teacher’s perceived effectiveness
of each type of intervention, and (c) teacher’s likelihood
of using each intervention in the future. Mean scores for
each question were calculated by totaling each teacher’s
response and dividing by 6.
Experimental Design and Procedures
An alternating treatments design (Barlow & Hayes,
1979) was used to compare the three types of opportuni-
ties to respond (i.e., individual responding vs. choral
responding vs. mixed responding [70% choral respond-
ing and 30% individual responding]). All sessions were
8 minutes in length; the first 4 minutes consisted of
review of sight words followed by 4 minutes of syllable
practice using the same sight words. Differences between
conditions were determined mainly by noting distinct
separation of data points using visual inspection as well
as by examining mean difference between conditions.
Trend lines were determined by using a split-middle
trend estimation line (Kazdin, 1982).
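As a rough illustration of the split-middle approach, the sketch below implements the quarter-intersect core of the procedure (the line through the median session and median value of each half of the series); the full split-middle method additionally shifts the line so that half the data points fall on or above it, a step omitted here. Data and names are hypothetical, not taken from the study.

# Minimal sketch of a quarter-intersect trend line, the core of the
# split-middle procedure commonly described alongside Kazdin (1982).
from statistics import median

def quarter_intersect_trend(values):
    # Line through the median point of each half of the series,
    # where x is the 1-based session number. For an odd number of
    # sessions, the middle point is omitted from both halves.
    n = len(values)
    first, second = values[:n // 2], values[(n + 1) // 2:]
    x1, y1 = median(range(1, n // 2 + 1)), median(first)
    x2, y2 = median(range((n + 1) // 2 + 1, n + 1)), median(second)
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1  # (slope, intercept)

# Hypothetical disruption rates across ten sessions of one condition:
rates = [1.6, 1.4, 1.5, 1.1, 1.2, 0.9, 1.0, 0.8, 0.9, 0.7]
slope, intercept = quarter_intersect_trend(rates)
print(f"trend: y = {slope:.2f} * session + {intercept:.2f}")
# trend: y = -0.10 * session + 1.70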
Results
Treatment Integrity
Treatment integrity data were collected for each
teacher to assess the implementation of each type of
responding condition. Data indicated that the six teachers
implemented the rate of opportunities to respond (between
4.5 and 5.0 per min) and the start of syllable practice
100% of the time. For Teachers 4 through 6, integrity on
sequence of steps was 100%, and integrity on steps in the
sequence was 100% for cue, wait time, questions, and
feedback given. For Teacher 1, integrity on sequence of
steps averaged 98.5% (range = 94.03–100%), and integ-
rity on steps in the sequence averaged 100% for cue, wait
time, and questions, and 94.03% for feedback. For
Teacher 2, integrity on sequence of steps averaged 99.8%
(range = 99.2–100%), and integrity on steps in the
sequence was 100% for cue, wait time, and questions,
and 99.2% for feedback (range = 97.6–100%). For
Teacher 3, integrity on sequence of steps averaged 99.7%
(range = 98.68–100%), and integrity on steps in the
sequence averaged 100% for cue, wait time, and ques-
tions, and 98.68% for feedback (range = 97.36–100%).
Disruptive Behavior
Table 2 summarizes the means and ranges of disruptive
behavior per minute, percentage of intervals of off-task
behavior, and percentage of active student responses for
the six students across the three types of opportunities to
respond. For five out of six students, the mean rate of disrup-
tive behavior per minute was less during the mixed
responding condition than during individual responding or
choral responding conditions. Because of the large vari-
ability and extreme scores in Teo’s data, median scores
rather than mean scores are reported (Borg & Gall, 1989).
Figure 1 depicts the rate of disruptive behavior per ses-
sion for the six students. The level of disruptive behavior
was typically lowest during mixed responding and high-
est during individual responding for all students except
for Teo. With the exception of Teo, all the participants’
data indicated stable trend lines with little variability.
Off-Task Behavior
Table 3 summarizes the mean scores and ranges of
percentage of intervals of off-task behavior per type of
opportunities to respond across students. Five out of six
students demonstrated a lower mean percentage of off-task
behavior in the mixed responding condition in comparison
with the individual and choral responding conditions. One
student (Amber) demonstrated a slightly lower mean per-
centage of off-task behavior in the choral responding con-
dition than in the mixed responding condition.
As indicated in Figure 2, the level of off-task behavior
was typically lowest during the mixed responding condi-
tion for Frank, D’Andy, Monty, and Mats, whereas the
level of off-task behavior was typically highest during
the individual responding condition for Frank, Monty,
Amber, and Mats. For Amber, the level of off-task
behavior was lowest during the last two data points of
the choral responding condition. For D’Andy, the level
of off-task behavior was highest during the last two data
points of the choral responding condition. Similar to dis-
ruptive behavior, Teo’s data indicated a great deal of
overlap between the conditions with no clear differences
in magnitude, and with moderate to large variability.
Active Student Responding
The means and ranges for active student responses
across the six participants are presented in Table 4. All
six students demonstrated a higher mean percentage of
active student responding in the mixed responding con-
dition in comparison with the individual responding
condition and a higher percentage of active student
responding in the choral responding condition in com-
parison with individual responding. Results for active
student responding between the mixed and choral
responding were less clear. Three students (Frank,
D’Andy, and Monty) demonstrated a higher mean per-
centage of active student responding in the mixed
responding condition in comparison with the choral
responding condition, whereas three students (Teo,
Amber, and Mats) demonstrated a higher mean percent-
age of active student responding in the choral responding
condition than in the mixed responding condition.
As indicated in Figure 3, the level of percentage of
active student responding was typically highest during
Table 2
Means, Standard Deviations, and Ranges for Disruptive Behavior in Each Condition

          Individual               Choral                   Mixed
Student   M (SD)       Range       M (SD)       Range       M (SD)       Range
Frank     1.54 (0.32)  1.25–1.88   0.71 (0.27)  0.25–1.00   0.16 (0.12)  0.00–0.38
D’Andy    0.89 (0.33)  0.50–1.50   0.43 (0.20)  0.25–0.75   0.19 (0.11)  0.00–0.38
Monty     1.21 (0.19)  1.00–1.50   0.81 (0.31)  0.50–1.38   0.49 (0.19)  0.25–0.75
Teo       1.52 (0.87)  0.63–2.75   1.65 (1.28)  0.38–4.13   1.61 (0.48)  0.88–2.75
Amber     1.33 (0.14)  1.13–1.50   0.90 (0.13)  0.75–1.13   0.44 (0.22)  0.16–0.75
Mats      1.25 (0.16)  1.00–1.36   0.35 (0.09)  0.25–0.50   0.08 (0.07)  0.00–0.13
the mixed responding condition for Frank, D’Andy, and
Monty, whereas the level of percentage of active student
responding was typically highest during the choral
responding condition for Amber, Teo, and Mats. For
Frank, Monty, Amber, and Mats, the level of percentage
of active student responding was typically lowest and
had the greatest amount of variability during individual
responding. The least amount of variability for the six
participants occurred during mixed responding.
[Figure 1. Rate of disruptive behavior per minute across sessions
for each participant (Frank, D’Andy, Monty, Teo, Amber, and Mats).]

Social Validity
At the end of the study, all six teachers completed the
social validity questionnaire, which consists of nine
questions with 4-point Likert-type scale responses ranging
from 1 (not at all) to 4 (very). In response to which inter-
vention was the most difficult to implement, four of six
teachers thought the mixed responding was the most dif-
ficult to implement (M = 2.33; range = 1–3), one teacher
believed choral responding was the most difficult, and
one teacher replied that individual responding was most
difficult to implement. Low mean scores (M = 1.0) on
teachers’ perceived difficulty with the study’s procedures
suggested that the teachers implemented individual and
choral responding with ease. High mean scores (M = 4.0)
suggested that teachers found the training sessions to be
very helpful. Although teachers already implemented
individual responding, the midrange scores for choral
(M = 2.83; range = 1–4) and mixed (M = 2.5; range =
1–4) responding suggested that teachers might be likely
to implement choral responding in the future.
Discussion
This study identifies several key findings. First, in
terms of disruptive behavior, mixed responding appears
to be a more effective instructional strategy than either
choral or individual responding. Five of six students had
lower mean rates of disruptive behavior during mixed
responding than during choral or individual responding.
This finding supports Stevens and Rosenshine’s (1981)
strong recommendation for the use of mixed responding
(70% choral, 30% individual). Second, results indicate
that choral responding is a more effective instructional
strategy than individual responding in terms of decreas-
ing disruptive and off-task behavior. Five out of six
participants had lower mean rates of disruptive behavior
and lower mean percentages of intervals of off-task
behavior during choral responding than during individ-
ual responding. This finding is consistent with earlier
research (McKenzie & Henry, 1979; Sainato et al., 1987;
Sindelar et al., 1986; Sutherland et al., 2003).
Differences between choral and mixed responding are
less consistent for off-task behavior. Four students had
fewer intervals of off-task behavior during mixed
responding, and one student had fewer intervals of off-
task behavior during choral responding. However, the
group mean for intervals of off-task behavior during
mixed responding was 18.8%, whereas the group means
for off-task behavior during choral and individual
responding were 26.9% and 42.0%, respectively. Given
the criterion of 90% for student on-task behavior by the
Council for Exceptional Children (1987), only the mixed
responding condition (81.2%) somewhat approached
Table 3
Means, Standard Deviations, and Ranges for Off-Task Behavior in Each Condition

          Individual                 Choral                     Mixed
Student   M (SD)         Range       M (SD)         Range        M (SD)        Range
Frank     56.25 (6.85)   45.83–62.50 32.73 (5.60)   25.00–41.66  16.55 (7.91)  8.32–25.00
D’Andy    25.60 (6.56)   16.66–33.33 19.05 (5.82)   12.50–25.00  9.89 (4.42)   4.17–16.66
Monty     40.27 (10.21)  29.16–62.50 26.56 (7.69)   12.50–37.50  16.67 (9.32)  8.33–37.50
Teo       28.47 (20.65)  4.16–62.50  31.24 (20.72)  8.30–62.50   22.02 (9.54)  8.33–33.33
Amber     47.50 (15.21)  25.00–66.70 23.33 (6.97)   16.66–33.33  24.30 (7.17)  16.66–33.33
Mats      54.17 (9.86)   45.83–66.67 28.47 (9.29)   20.83–45.83  23.33 (5.59)  20.83–33.33
Table 4
Means, Standard Deviations, and Ranges for Active
Student Responding in Each Condition
Individual Choral Mixed
Student M (SD) Range M (SD) Range M (SD) Range
Frank 22.31 (13.31) 12.5–46.15 69.34 (16.78) 44.73–89.18 84.35 (5.57) 76.31–94.73
D’Andy 89.32 (7.23) 74.28–94.73 93.25 (6.26) 81.57–100.00 97.28 (4.16) 88.88–100.00
Monty 60.19 (22.68) 33.33–91.89 84.2 (6.34) 71.42–91.66 90.50 (6.47) 77.77–100.00
Teo 82.20 (15.72) 52.63–94.44 93.79 (6.30) 82.60–100.00 84.28 (7.60) 72.72–94.17
Amber 58.10 (22.54) 34.14–92.10 96.38 (2.28) 94.87–100.00 87.65 (7.74) 80.55–97.14
Mats 42.20 (27.88) 13.04–67.50 75.27 (19.06) 40.00–94.44 62.80 (4.27) 56.41–67.50
[Figure 2. Percentage of intervals of off-task behavior across
sessions for each participant (Frank, D’Andy, Monty, Teo, Amber,
and Mats).]
CEC standards. However, in comparison with individual
responding, there appears to be a measurable benefit
(Horner et al., 2005).
For active student responding, three of six students
had their highest mean percentages during mixed
responding (M = 90.7%), whereas three students had
their highest mean percentages during choral responding
(M = 82.3%). In light of recommendations by the CEC
(1987), these percentages approach or exceed the 85%
criterion for student correct responses during review.
However, the group mean for active student responding
during individual responding was lowest among the
three conditions (M = 59.1%), and this percentage was
well below the criterion set by the CEC. In addition,
mean percentages were highest for off-task behavior for
all six participants during individual responding, again
being consistent with previous research findings
(McKenzie & Henry, 1979; Sainato et al., 1987; Sindelar
et al., 1986; Sutherland et al., 2003).
[Figure 3. Percentage of active student responding across sessions
for each participant (Frank, D’Andy, Monty, Teo, Amber, and Mats).]

The lack of differential effects across the three types
of opportunities to respond on disruptive behavior and
off-task behavior for one student, Teo, deserves further
attention. The mean rate of disruptive behavior was
approximately equal across the three types of opportuni-
ties to respond. Teo’s data for disruptive and off-task
behavior among the three conditions show substantial
variability and overlap. The high rates of Teo’s disrup-
tive behavior and off-task behavior may indicate that the
instructional intervention of mixed or choral responding
was not powerful enough to decrease his disruptive
behavior and off-task behavior. For example, incidental
observations indicate that during the teacher feedback
procedure, Teo talked with a peer sitting next to him and
the peer responded. It is possible that teacher prompts
could have decreased the rate of Teo’s disruptive behav-
ior and frequency of off-task behavior. However, his
teacher informally reported that she did not feel comfort-
able implementing and following up on negative conse-
quences because she was not his homeroom teacher.
Teo’s disruptive and off-task behavior may also have
been altered by the presence of setting factors (Davis &
Fox, 1999). For example, Teo suffered from migraine
headaches and this was not discovered until halfway
through the study. Furthermore, the teacher indicated that
she was aware of serious problem behavior during transi-
tion time before language arts.
Social validity data reveal that the six teachers felt
that the study did not disrupt their classroom environ-
ment and that the training session was very helpful. All
six teachers stated that they currently used individual
responding and indicated that choral responding was
easy to implement, supporting earlier research wherein
teachers provided similar feedback (Sainato et al., 1987).
Four of six teachers commented that mixed responding
was the most difficult type of opportunities to respond to
implement because they had to read a randomized list.
Instead, these teachers endorsed approximating the 70%
choral to 30% individual ratio from memory, indicating
the acceptability of mixed responding as a teaching strat-
egy (Schwartz & Baer, 1991). However, Teachers 1 and
6 reported that they would be very likely to use mixed
responding in the future. Teacher 6 commented that the
mixed responding had an “element of surprise” because
students did not know if they were called on individually
until the “very last second.” After a visual inspection of
the data, Teacher 1 stated that she would be very likely
to use mixed responding in the future. Implementing
increased rates of opportunities to respond in ways that fit within
the details of day-to-day classroom instruction and that
do not radically alter teachers’ curriculum is one way
researchers can encourage teachers to maintain evidence-based
practices in their classrooms (Gersten, Vaughn, Deshler,
& Schiller, 1997).
It is interesting that most teachers’ perceptions of the
effects of the three types of opportunities to respond on
the dependent variables were not confirmed by the data.
For example, among the five teachers for whom mixed
responding produced the lowest rate of disruptive behav-
ior, only Teacher 3 had noticed decreases in disruptive
behavior after implementing the mixed responding pro-
cedure; the other four teachers believed choral respond-
ing produced the largest effect. The fact that the teachers
did not reliably discern the differential effects of the
three different teaching strategies makes a strong case
for using data collection and using objective criteria to
make decisions about student classroom behavior (Witt,
VanDerHeyden, & Gilbertson, 2004).
Although mixed responding appeared to be more
effective in reducing disruptive behavior than choral and
individual responding for five out of six students, a few
limitations may temper the power of the statements that
can be made as a result of this study. First, as is inherent
in all single subject research designs, the small sample
size limits the generalizability of the findings. Thus, gen-
eralization to other academic activities and other settings,
or to students by age, grade, gender, or learning histories,
requires systematic replication (Kazdin, 1982). However,
obtaining similar responses across individuals and two
different types of schools suggests that the effect of
mixed responding might be generalizable (Trolinder,
Choi, & Proctor, 2004).
Second, there are several overlapping data points
among the participants’ dependent variables with active
student responding during choral and mixed responding.
Thus, it is difficult to determine which instructional
strategy is most effective in increasing active student
responding. Third, although two observers were used to
assess treatment integrity data and IOA was 100%, only
15% of the treatment sessions were observed.
Fourth, teacher implementation of contingent conse-
quences outside of the learning trial was not recorded.
Therefore, the extent of teacher use of individual attention,
punishment, or extinction on the outcomes of the depen-
dent variables is not known. For example, teacher attention
may have affected the percentage of intervals of off-task
behavior. Skinner and colleagues (1994) noted a similar
limitation and reported in their study that individual atten-
tion might have been functionally related to high rates of
attention to tasks.
As a logical next step, further research could compare
choral responding with mixed responding: with students
of different ages and across various subject areas such as
math and science (Carnine, 1976), across sessions of more
than 8 minutes (Sainato et al., 1987), and with children
identified with various learning disabilities or with autism
(Koegel, Dunlap, & Dyer, 1980). These extensions would
help establish and verify the conditions under which vary-
ing types of responding are more effective and efficient.
In addition, further research would do well to include
summative assessments at the end of the study to measure
the effect of the three types of opportunities to respond on
individual student learning. For example, researchers
could examine the influence of the three types of oppor-
tunities to respond on sight word acquisition and then
measure increases in reading comprehension or sight
word vocabulary (Skinner & Shapiro, 1989). Because the
effects of the three types of opportunities to respond on
one student were inconclusive, researchers could use
functional assessments to gather information on the ante-
cedent and consequent events that are associated with the
occurrence of challenging behaviors in combination with
instructional strategies (Scott & Kamps, 2007). In addi-
tion, social validity could be obtained from the students’
perspective as part of future directions. Finally, research-
ers should continue to investigate an optimal rate of
opportunities to respond on the percentage of correct
responses and error rates (West & Sloane, 1986).
Implications for Practice
Before implementing the mixed and choral responding
procedures, teachers could consider that for a few stu-
dents who lack impulse control, the implementation of
precorrection strategies (i.e., reminding students to remain
quiet after each response and to use inside voices) may be
needed. The long-term benefits of using a systematic
questioning strategy may outweigh the initial time
involved to acquire a new instructional technique. These
benefits include the following: students can respond up to
three or four times more (depending on group size) dur-
ing choral responding than during individual responding
(Sindelar et al., 1986), and teachers could use mixed and
choral responding to reduce disruptive and off-task
behavior and reduce the amount of time students pas-
sively attend during instruction (Sterling, Barbetta,
Heward, & Heron, 1997).
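To see the arithmetic behind the first benefit, consider a hypothetical small group; the group size, questioning rate, and session length below are illustrative assumptions, not figures from the studies cited. Under these idealized conditions the per-student multiplier for choral responding equals the group size, which is consistent with the three-to-four-fold estimate above for small groups.

    # Illustrative arithmetic only; group size, questioning rate, and session
    # length are assumptions, not data from Sindelar et al. (1986).
    group_size = 4           # students in the instructional group
    questions_per_minute = 3
    minutes = 5
    total_questions = questions_per_minute * minutes   # 15 questions asked

    # Individual responding: each question goes to one student, so response
    # opportunities are divided across the group.
    individual_per_student = total_questions / group_size   # 3.75 responses

    # Choral responding: every student answers every question in unison.
    choral_per_student = total_questions                    # 15 responses

    print(choral_per_student / individual_per_student)      # 4.0, i.e., the group size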
References
Anderson, L. M., Evertson, C. M., & Brophy, J. E. (1979). An exper-
imental study of effective teaching in first grade reading groups.
The Elementary School Journal, 79, 193–223.
Armendariz, F., & Umbreit, J. (1999). Using active responding to
reduce disruptive behavior in a general education classroom.
Journal of Positive Behavior Interventions, 1, 152–158.
Barbetta, P. M., & Heward, W. L. (1993). Effects of active student
response during error correction on the acquisition and mainte-
nance of geography facts by elementary students with learning
disabilities. Journal of Behavioral Education, 3, 217–233.
Barlow, D. H., & Hayes, S. C. (1979). Alternating treatments design:
One strategy for comparing the effects of two treatments in a single
subject. Journal of Applied Behavior Analysis, 12, 199–210.
Borg, W. R., & Gall, M. D. (1989). Educational research: An intro-
duction. White Plains, NY: Longman.
Carnine, D. W. (1976). Effects of two teacher-presentation rates on
off-task behavior, answering correctly, and participation. Journal
of Applied Behavior Analysis, 9, 199–206.
Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior
analysis. Columbus, OH: Merrill.
Council for Exceptional Children. (1987). Academy for effective
instruction: Working with mildly handicapped students. Reston,
VA: Author.
Davis, C. A., & Fox, J. (1999). Evaluating environmental arrange-
ment as setting events: Review and implications for measurement.
Journal of Behavioral Education, 9, 77–96.
Gersten, R., Vaughn, S., Deshler, D., & Schiller, E. (1997). What we
know about using research findings: Implications for improving
special education practice. Journal of Learning Disabilities, 30,
466–476.
Godfrey, S. A., Grisham-Brown, J., Schuster, J. W., & Hemmeter, M. L.
(2003). The effects of three techniques on student participation
with preschool children with attending problems. Education and
Treatment of Children, 26, 255–272.
Good, T. L. (1970). Which pupils do teachers call on? The Elementary
School Journal, 70, 190–198.
Greenwood, C. R. (2001). Science and students with learning and
behavioral problems. Behavioral Disorders, 27, 37–52.
Greenwood, C. R., Delquadri, J., & Hall, R. V. (1984). Opportunity
to respond and student academic achievement. In W. L. Heward,
T. E. Heron, D. S. Hill, & J. Trap-Porter (Eds.), Focus on behavior
analysis in education (pp. 58–88). Columbus, OH: Merrill.
Gunter, P. L., Reffel, J. M., Barnett, C. A., Lee, J. M., & Patrick, J.
(2004). Academic response rates in elementary-school class-
rooms. Education and Treatment of Children, 27, 105–113.
Gunter, P. L., Shores, R. E., Jack, S. L., Denny, R. K., & DePaepe, P. A.
(1994). A case study of the effects of altering instructional interac-
tions on the disruptive behavior of a child identified with severe
behavior disorders. Education and Treatment of Children, 17,
435–444.
Gunter, P. L., Venn, M. L., Patrick, J., Miller, K. A., & Kelly, L.
(2003). Efficacy of using momentary time samples to determine
on-task behavior of students with emotional/behavioral disorders.
Education and Treatment of Children, 26, 400–412.
Hastings, R. P., & Oakford, S. (2003). Student teachers’ attitudes
towards the inclusion of children with special needs. Educational
Psychology, 23, 87–94.
Haydon, T., Mancil, G. R., & VanLoan, C. (in press). The effects of
opportunities to respond on the on-task behavior for a student
emitting disruptive behaviors in a general education classroom: A
case study. Education and Treatment of Children.
Heward, W. L. (1994). Three “low tech” strategies for increasing the
frequency of active student response during group instruction. In
R. Gardner, III, D. M. Sainato, J. O. Cooper, & T. E. Heron (Eds.),
Behavior analysis in education: Focus on measurably superior
instruction (pp. 283–320). Monterey, CA: Brooks/Cole.
Heward, W. L., Courson, F. H., & Narayan, J. S. (1989). Using choral
responding to increase active student response. Teaching Exceptional
Children, 21, 72–75.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery M.
(2005). The use of single-subject research to identify evidence-based
practice in special education. Exceptional Children, 71, 165–180.
Kazdin, A. E. (1982). Single case research designs. New York: Oxford
University Press.
Kennedy, C. H. (2005). Single-case designs for educational research.
Boston: Allyn & Bacon.
Koegel, R. L., Dunlap, G., & Dyer, K. (1980). Intertrial interval duration
and learning in autistic children. Journal of Applied Behavior
Analysis, 13, 91–99.
McKenzie, G. R., & Henry, M. (1979). Effects of testlike events on
on-task behavior, test anxiety, and achievement in a classroom
rule-learning task. Journal of Educational Psychology, 71,
370–374.
Miller, A. D., Hall, M. A., & Heward, W. L. (1995). Effects of
sequential 1-minute time trials with and without inter-trial
feedback and self-correction on general and special education
students’ fluency with math facts. Journal of Behavioral Education,
5, 319–345.
Nelson, J. R., & Roberts, M. L. (2002). Ongoing reciprocal teacher-
student interactions involving disruptive behaviors in general
education classrooms. Journal of Emotional & Behavioral
Disorders, 8, 27–39.
Noell, G. H., & Witt, J. C. (1998). Toward a behavior analytic approach
to consultation. In T. S. Watson & F. M. Gresham (Eds.), Handbook
of child behavior therapy (pp. 41–57). New York: Plenum.
Sainato, D. M., Strain, P. S., & Lyon, S. R. (1987). Increasing academic
responding of handicapped preschool children during group instruc-
tion. Journal of the Division for Early Childhood, 12, 23–30.
Schwartz, I. S., & Baer, D. M. (1991). Social-validity assessments: Is
current practice state of the art? Journal of Applied Behavior
Analysis, 24, 189–204.
Scott, T. M., & Kamps, D. M. (2007). The future of functional behavior
assessment in school settings. Behavioral Disorders, 32, 146–157.
Sindelar, P. T., Bursuck, W. D., & Halle, J. W. (1986). The effects of two
variations of teacher questioning on student performance. Education
and Treatment of Children, 9, 56–66.
Skinner, C. H., Belfiore, P. J., Mace, H. W., Williams-Wilson, S., &
Johns, G. A. (1997). Altering response topography to increase response
efficiency and learning rates. School Psychology Quarterly, 12,
54–64.
Skinner, C. H., Fletcher, P. A., & Henington, C. (1996). Increasing
learning rates by increasing student responses rates: A sum-
mary of research. School Psychology Quarterly, 11, 313–325.
Skinner, C. H., Johnson, C. W., Larkin, J., Lessley, D. J., &
Glowacki, M. L. (1995). The influence of rate of presentation
during taped-words interventions on reading performance. Journal
of Emotional & Behavioral Disorders, 3, 214–224.
Skinner, C. H., & Shapiro, E. S. (1989). A comparison of taped-words
and drill interventions on reading fluency in adolescents with behav-
ior disorders. Education and Treatment of Children, 12, 123–133.
Skinner, C. H., Smith, E. S., & McLean, J. E. (1994). The effects of
intertrial interval duration on sight-word learning rates in children
with behavioral disorders. Behavioral Disorders, 19, 98–107.
Sterling, R. M., Barbetta, P. M., Heward, W. L., & Heron, T. E. (1997). A
comparison of active student response and on-task instruction on the
acquisition and maintenance of health facts by fourth grade special
education students. Journal of Behavioral Education, 7, 151–165.
Stevens, R., & Rosenshine, B. (1981). Advances in research on teaching.
Exceptional Education Quarterly, 2, 1–9.
Sutherland, K. S., Alder, N., & Gunter, P. L. (2003). The effect of
increased rates of opportunities to respond on the classroom
behavior of students with emotional/behavioral disorders. Journal
of Emotional and Behavioral Disorders, 11, 239–248.
Trolinder, D. M., Choi, H., & Proctor, T. B. (2004). Use of delayed
praise as a directive and its effectiveness on on-task behavior.
Journal of Applied School Psychology, 20, 61–83.
Walker, H. M., & Severson, H. H. (1993). Systematic Screening for
Behavior Disorders. Longmont, CO: Sopris West.
West, R. P., & Sloane, H. N. (1986). Teacher presentation rate
and point delivery rate: Effects on classroom disruption, perfor-
mance accuracy, and response rate. Behavior Modification, 10,
267–286.
Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004).
Troubleshooting behavioral interventions: A systematic process
for finding and eliminating problems. School Psychology Review,
33, 363–383.
Wolery, M., Ault, M. J., Doyle, P. M., Gast, D. L., & Griffin, A. M.
(1992). Choral and individual responding: Identification of
interactional effects. Education and Treatment of Children, 15,
289–309.
Yarbrough, J. L., Skinner, C. H., Lee, Y. J., & Lemmons, C. (2004).
Decreasing transition times in a second grade classroom: Scientific
support for the timely transitions game. Journal of Applied School
Psychology, 20, 85–107.
Todd Haydon, PhD, is an assistant professor of special educa-
tion at the University of Cincinnati. His research interests
include effective teaching practices, functional behavior
assessments, and positive behavior supports.
Maureen A. Conroy, PhD, is a professor of special education
at Virginia Commonwealth University. Her research interests
include students with learning and behavior disorders.
Terrance M. Scott, PhD, is a professor and director of Special
Education Programs at the University of Louisville and con-
ducts research on interventions related to student academic
and behavioral success.
Paul T. Sindelar, PhD, is a professor of special education at
the University of Florida. His research interests include
teacher preparation, induction, and mentoring.
Brian R. Barber, MEd, is a doctoral student at the University
of Florida. His research interests include the neuropsychologi-
cal bases of emotional and behavioral disorders and the inter-
face of these characteristics with social-emotional learning.
Ann-Marie Orlando, MS, is a doctoral student at the
University of Florida. Her current interests include communi-
cation, assistive technology, and inclusion.
This technical assistance document was adapted from the PBIS Technical Brief on Classroom PBIS Strategies written by: Brandi Simonsen, Jennifer Freeman,
Steve Goodman, Barbara Mitchell, Jessica Swain-Bradway, Brigid Flannery, George Sugai, Heather George, and Bob Putman, 2015.
Additional assistance was provided to the Office of Special Education Programs by Brandi Simonsen and Jennifer Freeman. Special thanks to Allison Blakely,
Ambra Green, and Jennifer Rink, OSEP interns who also contributed to the development of this document.
Purpose and Description
What is the purpose of this document?
The purpose of this document is to summarize evidence-based, positive, proactive, and responsive classroom behavior intervention and support strategies for
teachers. These strategies should be used classroom-wide, intensified to support small-group instruction, or amplified further for individual students. These
strategies can help teachers capitalize on instructional time and decrease disruptions, which is crucial as schools are held to greater academic and social
accountability measures for all students.
What needs to be in place before I can expect these strategies to work?
The effectiveness of these classroom strategies is maximized when: (a) the strategies are implemented within a school-wide multi-tiered behavioral
framework, such as school-wide positive behavioral interventions and supports (PBIS; see www.pbis.org); (b) classroom and school-wide expectations and
systems are directly linked; (c) classroom strategies are merged with effective instructional design, curriculum, and delivery; and (d) classroom-based data
are used to guide decision making. The following school- and classroom-level supports should be in place to optimize the fidelity and benefits of
implementation.
School-level supports
• A multi-tiered framework, including strategies for identifying and teaching
expectations, acknowledging appropriate behavior, and responding to
inappropriate behavior
• The school-wide framework is guided by school-wide discipline data
• Appropriate supports for staff are provided, including leadership teaming,
supporting policy, coaching, and implementation monitoring
Classroom-level supports
• Classroom system for teaching expectations, providing acknowledgments,
and managing rule violations linked to the school-wide framework
• Classroom management decisions are based on classroom behavioral data
• Effective instructional strategies implemented to the greatest extent
possible
• Curriculum is matched to student need and supporting data
What are the principles that guide the use of these strategies in the classroom?
The purpose of the guiding principles is to define the characteristics and cultural features that drive the use of these classroom strategies within a multi-tiered
framework. The guiding principles help establish the fundamental norms, rules, and ethics that are essential to the success of these classroom strategies
within a multi-tiered framework. These seven principles are the foundational values that drive the success of these classroom strategies and are important to
keep in mind when developing contextually appropriate adaptations of the strategies suggested in this document.
Professional: Business-like, objective, neutral, impartial, and unbiased
Cultural: Considerate of individual’s learning history and experiences (e.g., family, community, peer group)
Informed: Data-based, response-to-intervention
Fidelity-Based: Implementation accuracy is monitored and adjusted as needed
Educational: The quality of design and delivery of instruction is considered
Instructive: Expected behaviors are explicitly taught, modeled, monitored, and reinforced
Preventive: Environment arranged to encourage previously taught social skills and discourage anticipated behavior errors
User Guide
What is included in this guide?
There are three main parts to this guide on classroom PBIS strategies.
1. Interactive map with corresponding tables, tools, and tips. The interactive map provides the links to the document with the
content to support the implementation of the essential features of these classroom strategies.
2. Self-assessment and decision-making chart. These tools are intended to help guide the user to the parts of the document that
will be most useful.
3. Scenarios. Two scenarios are provided to extend learning and provide concrete examples of how to use classroom PBIS strategies
and many of the tools suggested in this document in concert.
A short summary and references are provided at the conclusion of the document.
What is not included in this guide?
This guide should not be considered a replacement for more comprehensive trainings and does not provide the depth of knowledge/research about each
topic. Although many of the strategies suggested in this document can be used for individual students, more support likely will be needed from a behavior
specialist or school psychologist for teachers who work with students with more intensive support needs.
This document also does not include strategies for addressing violent or unlawful student conduct.
Where do I start?
The interactive map provides an organizational layout of the document and some basic definitions of terms that may be helpful to know prior to taking the
self-assessment. Teachers should begin with the self-assessment to gauge current classroom management practices. The self-assessment is designed to help
teachers know where to focus their attention (e.g., foundations, practices, data systems). After teachers take the self-assessment, the interactive map will
direct them to content that will be most useful. The decision-making flow chart should be used to help guide teachers in making decisions about making
adjustments within their classrooms.
Interactive Map of Core Features
Classroom Interventions and Supports
Foundations (Table 1)
1.1 Settings
The physical layout of the
classroom is designed to be
effective
1.2 Routines
Predictable classroom
routines are developed and
taught
1.3 Expectations
Three to five classroom rules
are clearly posted, defined,
and explicitly taught
Practices (Table 2)
Prevention
2.1 Supervision
Provide reminders
(prompts), and actively
scan, move, and interact
with students
2.2 Opportunity
Provide high rates and
varied opportunities for all
students to respond
2.3 Acknowledgment
Using specific praise and
other strategies, let
students know when they
meet classroom
expectations
2.4 Prompts and
Precorrections
Provide reminders, before
a behavior is expected,
that clearly describe the
expectation
Response
2.5 Error Corrections
Use brief, contingent, and
specific statements when
misbehavior occurs
2.6 Other Strategies
Use other strategies that
preempt escalation,
minimize inadvertent
reward of the problem
behavior, create a
learning opportunity for
emphasizing desired
behavior, and maintain
optimal instructional time
2.7 Additional Tools
More tips for teachers
Data Systems (Table 3)
3.1 Counting
Record how often or how
many times a behavior
occurs (also called
frequency)
3.2 Timing
Record how long a behavior
lasts (also called duration).
3.3 Sampling
Estimate how often a
behavior occurs during part
of an interval, the entire
interval, or at the end of an
interval
3.4 ABC Cards, Incident
Reports, or Office
Discipline Referrals
Record information about
the events that occurred
before, during, and after a
behavior incident
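To make the three measurement strategies in 3.1–3.3 concrete, the short Python sketch below tallies a frequency count, a total duration, and a momentary time-sampling percentage. All event lists and times are fabricated examples; this is an editor's illustration of the arithmetic, not a tool from the brief.

    # Fabricated data illustrating the three recording methods in 3.1-3.3.

    # 3.1 Counting (frequency): tally each occurrence of the target behavior.
    events = ["call_out", "on_task", "call_out", "call_out"]
    frequency = sum(1 for e in events if e == "call_out")
    print(frequency)  # 3 occurrences

    # 3.2 Timing (duration): sum how long each episode lasts, in seconds.
    episodes = [(0, 45), (120, 160)]                 # (start, end) pairs
    duration = sum(end - start for start, end in episodes)
    print(duration)  # 85 seconds of the behavior

    # 3.3 Sampling (momentary time sampling): at the end of each interval,
    # record whether the behavior is occurring, then report the percentage.
    checks = [True, False, True, True, False]        # one check per interval
    print(100 * sum(checks) / len(checks))           # 60.0 percent of intervals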
Self-Assessment
Teachers should start with the first statement on the self-assessment. When unsure of an answer, teachers should go to the part of the interactive map
indicated and read more about the practice.
Classroom Interventions and Supports Self-Assessment Yes No
1. The classroom is physically designed to meet the needs of all students.
If yes, continue with self-assessment. If no, begin with 1.1 on the interactive map.
2. Classroom routines are developed, taught, and predictable.
If yes, continue with self-assessment. If no, begin with 1.2 on the interactive map.
3. Three to five positive classroom expectations are posted, defined, and explicitly taught.
If yes, continue with self-assessment. If no, begin with 1.3 on the interactive map.
4. Prompts and active supervision practices are used proactively.
If yes, continue with self-assessment. If no, begin with 2.1 on the interactive map.
5. Opportunities to respond are varied and are provided at high rates.
If yes, continue with self-assessment. If no, begin with 2.2 on the interactive map.
6. Specific praise and other strategies are used to acknowledge behavior.
If yes, continue with self-assessment. If no, begin with 2.3 on the interactive map.
7. Reminders are consistently given before a behavior might occur.
If yes, continue with self-assessment. If no, begin with 2.4 on the interactive map.
8. The responses to misbehaviors in the classroom are appropriate and systematic.
If yes, continue with self-assessment. If no, begin with 2.5 on the interactive map.
9. Data systems are used to collect information about classroom behavior.
If yes, continue with self-assessment. If no, begin with Table 3 on the interactive map.
If yes on all, celebrate successes! Continually monitor, and make adjustments as needed.
Decision-Making Chart
The decision-making chart will help guide teachers regarding implementation of best practices in preventing and responding to behaviors in the classroom.
Table 1. Matrix of Foundations for Classroom Interventions and Supports
1.1 SETTINGS
EFFECTIVELY DESIGN THE PHYSICAL ENVIRONMENT OF THE CLASSROOM
Description
and Critical Features
What key strategies can I use
to support behavior in my
classroom?
Elementary
Examples
How can I use this practice in
my elementary classroom?
Secondary
Examples
How can I use this practice in
my secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support
and Resources
What evidence supports this
practice, and where can I find
additional resources?
• Design classroom to
facilitate the most typical
instructional activities (e.g.,
small groups, whole group,
learning centers)
• Arrange furniture to allow
for smooth teacher and
student movement
• Assure instructional
materials are neat, orderly,
and ready for use
• Post materials that support
critical content and learning
strategies (e.g., word walls,
steps for the writing
process, mathematical
formulas)
• Design classroom layout
according to the type of
activity taking place:
– Tables for centers
– Separate desk for
independent work
– Circle area for group
instruction
• Consider teacher versus
student access to materials
• Use assigned seats and
areas
• Be sure all students can be
seen
• Design classroom layout
according to the type of
activity taking place:
– Circle for discussion
– Forward facing for group
instruction
• Use assigned seats
• Be sure all students can be
seen
• Consider options for storage
of students’ personal items
(e.g., backpacks, notebooks
for other classes)
• Equipment and materials are
damaged, unsafe, and/or
not in sufficient working
condition or not accessible
to all students
• Disorderly, messy, unclean,
and/or visually unappealing
environment
• Some students and/or parts
of the room not visible to
teacher
• Congestion in high-traffic
areas (e.g., coat closet,
pencil sharpener, teacher
desk)
• Inappropriately sized
furniture
• Teachers can prevent many
instances of problem
behavior and minimize
disruptions by strategically
planning the arrangement of
the physical environment1
1 Wong & Wong, 2009
• Arranging classroom
environment to deliver
instruction in a way that
promotes learning2
2 Archer & Hughes, 2011
Video: http://louisville.edu/education/abri/primarylevel/structure/group
Book:
Structuring Your Classroom for
Academic Success3
3 Paine, Radicchi, Rosellini, Deutchman, & Darch, 1983
1.2 ROUTINES
DEVELOP AND TEACH PREDICTABLE CLASSROOM ROUTINES
Description
and Critical Features
What key strategies can I use
to support behavior in my
classroom?
Elementary
Examples
How can I use this practice in
my elementary classroom?
Secondary
Examples
How can I use this practice in
my secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support
and Resources
What evidence supports this
practice, and where can I find
additional resources?
• Establish predictable
patterns and activities
• Promote smooth operation
of classroom
• Outline the steps for
completing specific
activities
• Teach routines and
procedures directly
• Practice regularly
• Recognize students when
they successfully follow
classroom routines and
procedures
• Create routines and
procedures for the most
problematic areas or times
• Promote self-managed or
student-guided schedules
and routines
• Establish routines and
procedures for:
– Arrival and dismissal
– Transitions between
activities
– Accessing help
– What to do after work is
completed
• Example arrival routines:
– Hang up coat and
backpack
– Put notes and homework
in the “In” basket
– Sharpen two pencils
– Go to desk and begin the
warm-up activities listed
on the board
– If you finish early, read a
book
• Consider routines and
procedures for:
– Turning in work
– Handing out materials
– Making up missed work
– What to do after work is
completed
• Example class period
routines:
– Warm-up activity for
students
– Review of previous
content
– Instruction for new
material
– Guided or independent
practice opportunities
– Wrap-up activities
• Assuming students will
automatically know your
routines and procedures
without instruction and
feedback
• Omitting tasks that students
are regularly expected to
complete
• Missing opportunities to
provide: (a) visual and/or
auditory reminders to
students about your routines
and procedures (e.g., signs,
posters, pictures, hand
signals, certain music
playing, timers) and/or (b)
feedback about student
performance
• Establishing classroom
routines and procedures
early in the school year
increases structure and
predictability for students;
when clear routines are in
place and consistently used,
students are more likely to
be engaged with school and
learning and less likely to
demonstrate problem
behavior 4
4 Kern & Clemens, 2007
• Student learning is
enhanced by teachers’
developing basic classroom
structure (e.g., routines and
procedures)5
5 Soar & Soar, 1979
Podcast: http://pbismissouri.org/archives/1252
Video: https://www.teachingchannel.org/videos/create-a-safe-classroom
1.3 EXPECTATIONS
POST, DEFINE, AND TEACH THREE TO FIVE POSITIVE CLASSROOM EXPECTATIONS
Description and Critical
Features
What key strategies can I use
to support behavior in my
classroom?
Elementary
Examples
How can I use this practice in
my elementary classroom?
Secondary
Examples
How can I use this practice in
my secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support
and Resources
What evidence supports this
practice, and where can I find
additional resources?
• If in a school implementing
a multi-tiered behavioral
framework, such as school-
wide PBIS, adopt the three
to five positive school-wide
expectations as classroom
expectations
• Expectations should be
observable, measurable,
positively stated,
understandable, and always
applicable
• Teach expectations using
examples and non-examples
and with opportunities to
practice and receive
feedback
• Involve students in defining
expectations within
classroom routines
(especially at the secondary
level)
• Obtain student commitment
to support expectations
• Post:
– Prominently in the
classroom
– Example: Be safe, Be
respectful, Be ready, Be
responsible
• Define for each classroom
setting or routine:
– Being safe means hands
and feet to self during
transitions
– Being safe means using
all classroom materials
correctly
• Teach:
– Develop engaging
lessons to teach the
expectations
– Regularly refer to
expectations when
interacting with students
(during prompts, specific
praise, and error
corrections)
• Post:
– Prominently in the
classroom
– Example: Be respectful,
Be responsible, Be a
good citizen, Be ready to
learn
• Define for each classroom
setting or routine:
– Being respectful means
using inclusive language
– Being responsible means
having all materials
ready at the start of
class
• Teach:
– Develop engaging
lessons to teach the
expectations
– Regularly refer to
expectations when
interacting with students
• Assuming students will
already know your
expectations
• Having more than five
expectations
• Listing only behaviors you
do not want from students
(e.g., no cell phones, no
talking, no gum, no hitting)
• Creating expectations that
you are not willing to
consistently enforce
• Selecting expectations that
are inappropriate for
developmental or age level
• Choosing expectations that
do not sufficiently cover all
situations
• Ignoring school-wide
expectations
• A dependable system of
rules and procedures
provides structure for
students and helps them to
be engaged with
instructional tasks6
6 Brophy, 2004
• Teaching rules and routines
to students at the beginning
of the year and enforcing
them consistently across
time increases student
academic achievement and
task engagement7
7 Evertson & Emmer, 1982; Johnson, Stoner, & Green, 1996
Case Study: http://iris.peabody.vanderbilt.edu/wp-content/uploads/2013/07/ICS-003
Podcast: http://pbismissouri.org/archives/1243
Videos: http://louisville.edu/education/abri/primarylevel/expectations/group
Table 2. Matrix of Practices for Classroom Interventions and Supports
2.1 SUPERVISION
USE ACTIVE SUPERVISION AND PROXIMITY
Practice Description and
Critical Features
What key strategies can I use
to support behavior in my
classroom?
Elementary Examples
How can I use this practice in
my elementary classroom?
Secondary Examples
How can I use this practice in
my secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support and
Resources
What evidence supports this
practice, and where can I find
additional resources?
A process for monitoring the
classroom, or any school setting,
that incorporates moving,
scanning, and interacting
frequently with students8
8 DePry & Sugai, 2002
Includes:
• Scanning: visual sweep of
entire space
• Moving: continuous
movement, proximity
• Interacting: verbal
communication in a
respectful manner, any
precorrections, non-
contingent attention, specific
verbal feedback
• While students are working
independently in centers,
scan and move around the
classroom, checking in with
students
• While working with a small
group of students, frequently
look up and quickly scan the
classroom to be sure other
students are still on track
• During transitions between
activities, move among the
students to provide
proximity; scan continuously
to prevent problems, and
provide frequent feedback as
students successfully
complete the transition
• While monitoring students,
move around the area,
interact with students, and
observe behaviors of
individuals and the group;
scan the entire area as you
move around all corners of
the area
• Briefly interact with
students: ask how they are
doing, comment, or inquire
about their interests; show
genuine interest in their
responses (This is an
opportunity to connect
briefly with a number of
students)
• Sitting or standing where
you cannot see the entire
room or space, such as
with your back to the group
or behind your desk
• Walking the same,
predictable route the entire
period of time, such as
walking the rows of desks
in the same manner every
period
• Stopping and talking with a
student or students for
several minutes
• Interacting with the same
student or groups of
students every day
• Combining prompts or
precorrection with active
supervision is effective across
a variety of classroom and
non-classroom settings 9
9 Colvin, Sugai, Good, & Lee, 1997; DePry & Sugai, 2002; Lewis, Colvin, & Sugai, 2000
Module: http://pbismissouri.org/archives/1304
Video: http://louisville.edu/education/abri/primarylevel/supervision/group
IRIS Ed (secondary): https://www.youtube.com/watch?v=rCqIzeU-0hQ
2.2 OPPORTUNITY
PROVIDE HIGH RATES AND VARIED OPPORTUNITIES TO RESPOND
Description and Critical
Features
What key strategies can I use
to support behavior in my
classroom?
Elementary
Examples
How can I use this practice in
my elementary classroom?
Secondary
Examples
How can I use this practice in
my secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support
and Resources
What evidence supports this
practice, and where can I find
additional resources?
A teacher behavior that requests
or solicits a student response
(e.g., asking a question,
presenting a demand)
Opportunities to respond include:
• Individual or small-
group questioning:
– Use a response pattern
to make sure that all
students are called on
• Choral responding:
– All students in a class
respond in unison to a
teacher question
• Nonverbal responses:
– Response cards, student
response systems,
guided notes
• Individual or small-
group questioning:
– Student names can be
on a seating chart, strips
of paper, or popsicle
sticks in a can or jar; as
questions are posed, a
student name is drawn
• Choral responding:
– Students read a morning
message out loud
together
– Students recite letter
sounds together
• Nonverbal responses:
– Thumbs up if you agree
with the character’s
choice in our story
• Individual or small-
group questioning:
– I just showed you how to
do #1; I am going to
start #2 second row; get
ready to help explain my
steps
• Choral responding:
– Write a sentence to
summarize the reading;
then share with your
peer partner before
sharing with me
• Nonverbal responses:
– Hands up if you got 25
for the answer
– Get online and find two
real-life examples for
“saturation point”
• A teacher states, “We
haven’t talked about this at
all, but you will summarize
the entire chapter for
homework. Work quietly for
45 minutes on this new
content, and I will collect
your papers at the end of
class.” (This is not
sufficiently prompted and
does not promote frequent
active engagement.)
• A teacher provides a 20-
minute lesson without
asking any questions or
prompting any student
responses.
• Increased rates of
opportunities to respond
support student on-task
behavior and correct
responses while decreasing
disruptive behavior 10
10 Carnine, 1976; Heward, 2006; Skinner, Pappas & Davis, 2005; Sutherland, Alder, & Gunter, 2003; Sutherland & Wehby, 2001; West & Sloane, 1986
• Teacher use of opportunities
to respond also improves
reading performance (e.g.,
increased percentage of
responses and fluency)11
11 Skinner, Belfiore, Mace, Williams-Wilson, & Johns, 1997
and mathematics
performance (e.g., rate of
calculation, problems
completed, correct
responses)12
12 Carnine, 1976; Logan & Skinner, 1998; Skinner, Smith, & McLean, 1994
Module: http://pbismissouri.org/archives/1306
Videos: http://louisville.edu/education/abri/primarylevel/otr/group
http://louisville.edu/education/abri/primarylevel/practice/group
2.3 ACKNOWLEDGMENT
USE BEHAVIOR-SPECIFIC PRAISE
Description and Critical
Features
What key strategies can I use
to support behavior in my
classroom?
Elementary Examples
How can I use this practice in
my elementary classroom?
Secondary Examples
How can I use this practice in
my secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support and
Resources
What evidence supports this
practice, and where can I find
additional resources?
Verbal statement that names the
behavior explicitly and includes a
statement that shows approval
• May be directed toward an
individual or group
• Praise should be provided
soon after behavior,
understandable, meaningful,
and sincere
• Deliver approximately five
praise statements for every
one corrective statement
• Consider student
characteristics (age,
preferences) when delivering
behavior-specific praise, and
adjust accordingly (e.g.,
praise privately versus
publicly)
• Following a transition where
students quietly listened to
instructions, “You did a great
job sitting quietly and
listening for what to do
next.”
• During educator-directed
instruction, a student raises
her hand. The educator says,
“Thank you for raising your
hand.”
• The educator walks over to a
student and whispers,
“Thank you for coming into
the room quietly.”
• “Blue Group, I really like the
way you all handed in your
projects on time. It was a
complicated project.”
• “Tamara, thank you for
being on time. That is the
fourth day in a row,
impressive.”
• After pulling a chair up next
to Steve, the teacher states,
“I really appreciate how you
facilitated your group
discussion. There were a lot
of opinions, and you
managed them well.”
• After reviewing a student’s
essay, the teacher writes,
“Nice organization. You’re
using the strategies we
discussed in your writing!”
• “Great job! Super! Wow!”
(These are general, not
specific, praise statements.)
• “Brandi, I like how you
raised your hand.” (Two
minutes later) “Brandi, that
was a nice response.” (This
is praising the same student
over and over again while
ignoring other students.)
• A teacher says “Nice hand
raise” after yelling at 20
students in a row for talking
out. (This does not maintain
a five-praise-to-one-correction
ratio.)
• “Thank you for trying to act
like a human.” (This, at best,
is sarcasm, not genuine
praise.)
• Contingent praise is
associated with increases in
a variety of behavioral and
academic skills13
13 Partin, Robertson, Maggin, Oliver, & Wehby, 2010
• Behavior-specific praise has
an impact in both special
and general education
settings 14
14 Ferguson & Houghton, 1992; Sutherland, Wehby, & Copeland, 2000
• Reinforcement should
happen frequently and at a
minimal ratio of five praise
statements for every one
correction15
15 Broden, Bruce, Mitchell, Carter, & Hall, 1970; Craft, Alber, Heward, 1998; Wilcox, Newman, & Pitchford, 1988
Module: http://pbismissouri.org/archives/1300
Video: http://louisville.edu/education/abri/primarylevel/praise/group
Other resources: http://www.interventioncentral.org/behavioral-interventions/motivation/teacher-praise-efficient-tool-motivate-students
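Because the five-to-one ratio recommended above is hard to judge in the moment, an observer (or the teacher, afterward) could tally both statement types during a lesson and compute the ratio. The Python sketch below is a hypothetical illustration of that bookkeeping, not a tool from the brief; the tally data are invented.

    # Hypothetical tally for checking the recommended 5:1 ratio of praise
    # statements to corrective statements during one observed lesson.
    statements = ["praise", "praise", "correction", "praise",
                  "praise", "praise", "praise", "correction"]

    praise = statements.count("praise")
    corrections = statements.count("correction")

    ratio = praise / corrections if corrections else float("inf")
    print(praise, corrections, round(ratio, 1))  # 6 2 3.0
    if ratio < 5:
        print("Below the 5:1 target; plan more behavior-specific praise.")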
2.3 ACKNOWLEDGMENT (CONTINUED)
USE OTHER STRATEGIES TO ACKNOWLEDGE STUDENT BEHAVIOR
Description and Critical
Features
What key strategies can I use
to support behavior in my
classroom?
Elementary Examples
How can I use this practice in
my elementary classroom?
Secondary Examples
How can I use this practice in my
secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support and
Resources
What evidence supports this
practice, and where can I find
additional resources?
Behavior contracts:
Documenting an agreement
between a teacher and
student(s) about: (a) expected
behavior, (b) available supports
to encourage expected behavior,
(c) rewards earned contingent on
expected behavior, and (d)
consequences if expected
behavior does not occur (or if
undesired behavior does occur)
Group contingencies: All
students have the opportunity to
meet the same expectation and
earn the same reward; the reward
may be delivered: (a) to all
students when one or a few
students meet the criterion
(dependent), (b) to all students if
all students meet the criterion
(interdependent), or (c) to each
student if the student meets the
criterion (independent)
Token Economies: Delivering a
token (e.g., pretend coin, poker
chip, points, tally mark, stamp)
contingent on appropriate
behavior that is exchangeable for
a back-up item or activity of
value to students
Behavior contracts: At the
beginning of the year, Mrs.
Gaines’s students sign a class
constitution; the document
specifies: (a) the expected
behavior (be safe, respectful,
and responsible), (b) supports
to be provided (reminders), (c)
rewards (earn Friday fun time),
and (d) consequences (try
again for next week)
Group contingencies: All
students will hand in homework
#2 by the due date; if we meet
this goal, next Friday we will
play State Bingo instead of
having a formal test review
Token economies: Thanks to
each student who worked
quietly on the mathematics task
for the past 10 minutes—that’s
responsible behavior! Each of
you earned a “star buck” to use
in the school-wide store
Behavior contracts: At the
beginning of each semester, Dr.
Gale has his students sign an
integrity pledge. It states that
students will complete their work
independently (expected
behavior), with teacher help
when needed (supports), to have
the potential of earning full points
on assignments (rewards). If
students do not maintain
integrity, they will lose points on
that assignment and in the
course.
Group contingencies: As a
class, we will generate five
questions that are examples of
“Synthesis.” If we can meet this
goal by 2:15, I will allow you to
sit where you would like (keeping
class expectations in mind) for
the last 20 minutes of the class
period.
Token economies: Alyiah, you
were very respectful when your
peer came in and asked for
space. You’ve earned 10 bonus
points toward your behavior goal.
Well done!
Behavior contracts: At Smith
Middle School, students sign a
contract stating that engaging in a
“zero tolerance offense” results in
losing all school-based privileges and
may result in being suspended or
expelled. They are not reminded of
this contract unless a violation occurs,
in which case they are typically
expelled—even if the violation was not
severe (e.g., bringing a dull plastic
knife in their lunch to cut an
apple). (This is not focused on
desired behavior and does rewards
or supports) not include
Group contingencies: Making the
goal unattainable (e.g., all students
will display perfect behavior all year),
using a reward you cannot deliver
(e.g., day off on Friday), or pointing
out to the entire group when a
student is detracting from the group.
Using rewards to encourage
students to engage in behaviors
that are not in their best interest
(this is bribing)
Token economies: Providing
points or tokens without specific
praise or to the same students or
groups of students or providing
tokens or points without
demonstrated behaviors
When implemented
appropriately, behavior
contracts,16
16 Drabman, Spitalnik, & O’Leary, 1973; Kelley & Stokes, 1984; White-Blackburn, Semb, & Semb, 1977; Williams & Anandam, 1973
group
contingencies,17
17 Barrish, Saunders, & Wolf, 1969; Hansen & Lignugaris-Kraft, 2005; Yarborough, Skinner, Lee, & Lemmons, 2004
and token
economies 18
18 Jones & Kazdin, 1975; Main & Munro, 1977; McCullagh & Vaal, 1975
result in increases in
desired behavior
Modules:
Classroom Behavior Management (Part 1): Key Concepts and Foundational Practices: http://iris.peabody.vanderbilt.edu/module/bi1/
http://iris.peabody.vanderbilt.edu/module/bi2/
http://pbismissouri.org/archives/1300
Case studies: https://iris.peabody.vanderbilt.edu/wp-content/uploads/pdf_case_studies/ics_encappbeh
Other resources: http://www.interventioncentral.org/behavioral-interventions/rewards/jackpot-ideas-classroom-rewards
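As a concrete illustration of the token-economy mechanics described above (tokens delivered contingent on behavior, exchangeable for back-up items), here is a minimal Python points-ledger sketch; the student names, point amounts, and "store" prices are invented for the example, not part of the brief.

    # Minimal token-economy ledger; names, point values, and store prices
    # are invented for illustration.
    balances = {"Alyiah": 0, "Steve": 0}
    store = {"homework pass": 25, "free-choice seat": 10}

    def award(student, points, behavior):
        # Deliver tokens contingent on appropriate behavior, paired with praise.
        balances[student] += points
        print(f"{student}: +{points} for {behavior} (total {balances[student]})")

    def exchange(student, item):
        # Trade accumulated tokens for a back-up item or activity of value.
        cost = store[item]
        if balances[student] >= cost:
            balances[student] -= cost
            print(f"{student} exchanged {cost} points for a {item}")
        else:
            print(f"{student} needs {cost - balances[student]} more points")

    award("Alyiah", 10, "respectful response to a peer")
    award("Alyiah", 10, "arriving on time")
    exchange("Alyiah", "free-choice seat")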
2.4 PROMPTS AND PRECORRECTIONS
MAKE THE PROBLEM BEHAVIOR IRRELEVANT WITH ANTICIPATION AND REMINDERS
Description and Critical
Features
What key strategies can I use
to support behavior in my
classroom?
Elementary Examples
How can I use this practice in
my elementary classroom?
Secondary Examples
How can I use this practice in
my secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support and
Resources
What evidence supports this
practice, and where can I find
additional resources?
Reminders that are provided
before a behavior is expected and
that describe what is expected:
• Preventative: take place
before the behavior response
occurs
• Understandable: the prompt
must be understood by the
student
• Observable: the student
must distinguish when the
prompt is present
• Specific and explicit:
describe the expected
behavior (and link to the
appropriate expectation)
Teach and emphasize self-
delivered (or self-managed)
prompts
• Before students begin
seatwork, provide a
reminder about how to
access help and materials, if
needed
• Before the class transitions,
a teacher states, “Remember
to show respect during a
transition by staying to the
right and allowing personal
space”
• Pointing to table as student
enters room (to remind
where to sit)
• A student looks at a picture
sequence prompting
effective hand washing and
successfully washes hands
prior to snack or lunch
• Pointing to a sign on the
board to indicate expectation
of a silent noise level prior to
beginning independent work
time
• Review of group activity
participation rubric prior to
the start of group work
• Sign above the homework
basket with a checklist of “to
dos” for handing in
homework
• A student checks her
planner, which includes
visual prompts to write down
assigned work and bring
relevant materials home to
promote homework
completion
• While teaching a lesson, a
student calls out, and the
educator states, “Instead of
calling out, I would like you
to raise your hand” (This is
an error correction—it came
after the behavior)
• Prior to asking students to
complete a task, the
educator states, “Do a good
job,” or gives a thumb’s up
signal (This is not specific
enough to prompt a
particular behavior)
• Providing only the “nos”
(e.g., No running, No
talking) instead of describing
the desired behavior or
failing to link to expectations
• Delivering prompts and pre-
corrections for appropriate
behavior results in improved
behavior 19
19 Arceneaux & Murdock, 1997; Faul, Stepensky, & Simonsen, 2012; Flood, Wilder, Flood, & Masuda, 2002; Wilder & Atwell, 2006
• Use prompts during
transitions to new routines
and for routines that are
difficult for students to
master20
20 Alberto & Troutman, 2013
Videos: http://louisville.edu/education/abri/primarylevel/prompting/group
http://louisville.edu/education/abri/primarylevel/modeling/group
2.5 ERROR CORRECTION
USE BRIEF, CONTINGENT, AND SPECIFIC ERROR CORRECTIONS TO RESPOND TO PROBLEM BEHAVIOR
Description and Critical
Features
What key strategies can I use
to support behavior in my
classroom?
Elementary Examples
How can I use this practice in
my elementary classroom?
Secondary Examples
How can I use this practice in
my secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support and
Resources
What evidence supports this
practice, and where can I find
additional resources?
• An informative statement,
typically provided by the
teacher, that is given when
an undesired behavior
occurs, states the observed
behavior, and tells the
student exactly what the
student should do in the
future
• Delivered in a brief, concise,
calm, and respectful
manner, typically in private
• Pair with specific contingent
praise after the student
engages in appropriate
behavior
• Disengage at end of error
correction and redirection—
avoid “power struggles”
• After a student calls out in
class the teacher responds,
“Please raise your hand
before calling out your
answer”
• After students are talking
too loudly during group
work, the teacher responds,
“Please use a quieter
whisper voice while working
with your partner”
• After a student is out of his
or her seat inappropriately,
the teacher responds,
“Please stop walking around
the room and return to your
seat to finish your work”
• When a student has not
started working within
one minute, “Jason,
please begin your writing
assignment” (Later) “Nice
job being responsible,
Jason, you have begun
your assignment”
• After student is playing
with lab equipment
inappropriately, the
teacher responds, “Please
stop playing with lab
equipment, and keep it on
the table” (Later) “Thank
you for being safe with
the lab equipment”
• Shouting “No!” (This is not
calm, neutral, or specific)
• A five-minute conversation
about what the student
was thinking (This is not
brief)
• A teacher loudly tells a
student that he is not
being responsible (This is
not calm or private)
• After providing an error
correction, a student
denies engaging in the
behavior; the teacher
repeats the correction in
an escalated tone and
continues to debate the
student—each exchange
escalates until shouting
ensues (This is a power
struggle)
• Error corrections that are direct,
immediate, and end with the
student displaying the correct
response are highly effective in
decreasing undesired behaviors
(errors) and increasing future
success rates21
21 Abramowitz, O’Leary, & Futtersak, 1988; Acker & O’Leary, 1988; Baker, 1992; Barbetta, Heward, Bradley, & Miller, 1994; Brush & Camp, 1998; Kalla, Downes, & van den Broek,
2001; McAllister, Stachowiak, Baer, & Conderman, 1969; Singh, 1990; Singh & Singh, 1986; Winett & Vachon, 1974
Error correction article: http://link.springer.com/article/10.1007/BF02110516
Strategies to interrupt/avoid power struggles: http://www.interventioncentral.org/behavioral-interventions/challenging-students/dodging-power-struggle-trap-ideas-teachers
Video: http://louisville.edu/education/abri/primarylevel/correction/group
2.6 USE OTHER STRATEGIES TO RESPOND TO PROBLEM BEHAVIOR
WHEN SELECTING STRATEGIES, RECALL THE PURPOSE OF EFFECTIVE CONSEQUENCES: (A) PREEMPT ESCALATION, (B) MINIMIZE INADVERTENT REWARD OF PROBLEM
BEHAVIOR, (C) CREATE LEARNING OPPORTUNITY FOR EMPHASIZING DESIRED BEHAVIOR, AND (D) MAINTAIN INSTRUCTIONAL TIME FOR THE REMAINDER OF THE CLASS
Description and Critical
Features
What key strategies can I use
to support behavior in my
classroom?
Elementary Examples
How can I use this practice in
my elementary classroom?
Secondary Examples
How can I use this practice in
my secondary classroom?
Non-Examples
What should I avoid when I’m
implementing this practice?
Empirical Support and
Resources
What evidence supports this
practice, and where can I find
additional resources?
Planned ignoring:
Systematically withholding
attention from a student when he
or she exhibits minor undesired
behavior that is maintained
(reinforced) by teacher attention
Planned ignoring:
During a whole-group activity,
James shouts the teacher’s name
to get her attention. The teacher
ignores the callouts and proceeds
with the activity
Planned ignoring:
During a lecture, Jen interrupts
the teacher and loudly asks her
question; the teacher ignores Jen
until she quietly raises her hand
Planned ignoring:
A student is loudly criticizing a
peer, resulting in other students
laughing at the targeted peer; the
teacher does nothing
(This is not minor and results in
peer attention)
Planned ignoring,22
22 Hall, Lund, & Jackson, 1968; Madsen, Becker, & Thomas, 1968; Yawkey, 1971
differential
reinforcement,23
23 Deitz, Repp, & Deitz, 1976; Didden, de Moor, & Bruyns, 1997; Repp, Deitz, & Deitz, 1976; Zwald & Gresham, 1982
response cost,24
24 Forman, 1980; Greene & Pratt, 1972; Trice & Parker, 1983
and time-out from
reinforcement 25
25 Barton, Brulle, & Repp, 1987; Foxx & Shapiro, 1978; Ritschl, Mongrella, & Presbie, 1972
are all proven
strategies to reduce problem
behavior
Module: http://pbismissouri.org/archives/1302
Video: http://louisville.edu/education/abri/primarylevel/correction
Other resources: http://www.interventioncentral.org/behavioral-interventions/challenging-students/behavior-contracts
Differential reinforcement:
Systematically reinforcing:
• Lower rates of problem
behavior (differential
reinforcement of low rates
of behavior [DRL])
• Other behaviors (differential
reinforcement of other
behavior [DRO])
• An alternative appropriate
behavior (differential
reinforcement of alternative
behavior [DRA])
• A physically incompatible
appropriate behavior
(differential reinforcement of
incompatible behavior
[DRI])
Differential reinforcement:
In the same scenario above, the
teacher ignores James’s callouts,
models a previously taught
attention-getting skill (e.g., hand
raise), and immediately gives
attention (calls on and praises) to
James when he raises his hand:
“That’s how we show respect!
Nice hand raise.” (DRA)
When providing instructions prior
to a transition, the teacher asks
students to hold a “bubble” in
their mouths (i.e., fill cheeks with
air), which is physically
incompatible with talking (DRI)
Differential reinforcement:
The teacher privately conferences
with a student and says, “I really
value your contributions, but we
need your peers to also have a
chance to participate in the
group. If you can reduce your
contributions to five or fewer, I’d
love to meet with you over lunch
to talk about the rest of your
ideas.” (DRL)
If we can make it through this
discussion without inappropriate
language, you can listen to music
during your independent work
time at the end of class (DRO)
Differential reinforcement:
The teacher reprimands students
each time they engage in
problem behavior and ignores
appropriate behavior
(This is the exact opposite of how
differential reinforcement should
be used)
2.6 USE OTHER STRATEGIES TO RESPOND TO PROBLEM BEHAVIOR
WHEN SELECTING STRATEGIES, RECALL THE PURPOSE OF EFFECTIVE CONSEQUENCES: (A) PREEMPT ESCALATION, (B) MINIMIZE INADVERTENT REWARD OF PROBLEM BEHAVIOR, (C) CREATE A LEARNING OPPORTUNITY FOR EMPHASIZING DESIRED BEHAVIOR, AND (D) MAINTAIN INSTRUCTIONAL TIME FOR THE REMAINDER OF THE CLASS
Description and Critical Features: What key strategies can I use to support behavior in my classroom?
Elementary Examples: How can I use this practice in my elementary classroom?
Secondary Examples: How can I use this practice in my secondary classroom?
Non-Examples: What should I avoid when I’m implementing this practice?
Empirical Support and Resources: What evidence supports this practice, and where can I find additional resources?
Response cost:
Removing something (e.g., tokens, points) based upon a student’s behavior in an attempt to decrease the behavior
Response cost:
When a student talks out, the
teacher pulls the student aside,
provides a quiet specific error
correction, and removes a marble
from his or her jar on the
teacher’s desk. The student is
then reminded how to resume
earning, and the teacher is
careful to award approximately
five marbles for every marble
removed.
Response cost:
When a student engages in
disrespectful language, the
teacher privately provides
feedback and removes a point
from the student’s point card.
The teacher is careful to provide
at least five points (and specific
praise) for every point removed
(and error correction delivered).
Response cost:
The teacher publicly flips a card
(from green to yellow to red) that
signals the student has lost
access to privileges. The teacher
loudly announces the “card
flip” and, when asked why,
states, “you know what you did.”
(This does not provide feedback
about what the student did wrong
or how to get back on track. It is
also a public reprimand.)
Time-out from reinforcement:
Brief removal of: (a) something
preferred (e.g., activity, item) or
(b) the student from a preferred
environment based on undesired
behavior
Time-out from reinforcement:
A group of students begin
breaking the crayons they are
using on a worksheet. The
teacher collects the crayons and
provides pencils to complete the
task.
Time-out from reinforcement:
After a student knocks over a
chair in the cafeteria in
frustration, the teacher removes
the student from her normal
lunch table and reviews
expectations with the student
before allowing her to resume
activities.
Time-out from reinforcement:
The teacher sends the student
from a difficult class that the
student dislikes to in-school
suspension, which is facilitated by
a preferred adult and often
attended by preferred peers for
the remainder of the day.
(This is not brief, and the student
was not removed from a
reinforcing environment—the
student was sent to a potentially
reinforcing environment.)
Table 3. Matrix of Data Systems for Classroom Interventions and Supports
3.1–3.4 DATA SYSTEMS
Data Collection Strategy: What key strategies can I use to collect data on student behavior in my classroom?
Tools and Resources for Data Collection Method: How can I use this to efficiently track student behavior in my classroom?
Conditions and Examples: For what types of behaviors will this strategy be appropriate?
Non-Examples of Use: For what types of behaviors will this strategy be inappropriate?
3.1 Counting behaviors:
Record or document how often or how
many times a behavior occurs (frequency)
within a specified period of time; convert
to rate by dividing count by time (minutes
or hours) observed
• Moving paper clips from one pocket
to the next
• Keeping paper-and-pencil tally
• Using a counter (like a counter used for golf)
• App on smartphone or tablet
Behaviors that are discrete (clear
beginning and end), countable (low
enough frequency to count), and
consistent (each incident of behavior is
of similar duration)
Examples:
• How often a student swears in class
• How many talk-outs versus hand
raises occur during a lesson
Behaviors that are not discrete (unclear when behavior begins or ends), not countable (occur too rapidly to count), or not consistent (e.g., behavior lasts for varying amounts of time)
Non-examples:
• How many times a student is off task
(likely not discrete or consistent)
• How often a student is out of seat
(likely not consistent)
3.2 Timing:
Record or document how long: (a) a
behavior lasts (duration from beginning to
end), (b) it takes for a behavior to start
following an antecedent (latency), or (c)
how much time elapses between
behaviors (inter-response time)
• Timer or clock (and recording the
time with paper and pencil)
• App on smartphone or tablet
• Use of vibrating timer (e.g.,
MotivAiders®)
Behaviors that are discrete (clear
beginning and end) and directly
observed
Examples:
• How long a student spends walking
around the classroom (duration of
out of seat)
• How long it takes a student to begin
working after work is assigned
(latency to on task)
• How long it takes a student to start the
next problem after finishing the last
one (inter-response time)
Behaviors that are not discrete (no clear beginning or end) or not directly observed
Non-examples:
• How long it takes a student to say an
inappropriate four-letter word
(duration is not the most critical thing
to measure)
• How long a student is off task (if the
behavior is not discrete; that is if the
behavior does not have a clear
beginning and end)
3.3 Sampling:
Estimating how often a behavior occurs by
recording whether it happened during part
of an interval (partial interval), during the
whole interval (whole interval), or at the
end of the interval (momentary time
sampling)
Shorter intervals lead to more precise
measurement
Partial interval is appropriate for shorter
and more frequent behaviors; whole
interval is appropriate for longer
behaviors; and momentary time sampling
facilitates multi-tasking (you record at the
end of the interval)
Create a table, with each box representing
a time interval (e.g., 30 seconds), and
decide how you will estimate (partial,
whole, momentary time sampling); use a
stopwatch or app to track each interval,
and record following your decision rule
Behaviors that are not discrete (unclear when behavior begins or ends), not countable (occur too rapidly to count), or not consistent (e.g., behavior lasts for varying amounts of time)
Examples:
• An estimate of how often a student is
off task (percentage of intervals off
task)
• An estimate of how often a student is
out of seat (percentage of intervals
out of seat)
Behaviors that are discrete (clear
beginning and end), countable (low
enough frequency to count), and
consistent (each incident of behavior is of
similar duration)
Non-examples:
• How often a student swears in class
(you could count this)
• How many talk-outs versus hand
raises occur during a lesson (you
could count this)
3.4 Antecedent-Behavior-
Consequence (ABC) cards, incident
reports, or office discipline referrals:
Record information about the events that
occurred before, during, or after a
behavioral incident
Paper-and-pencil notes on pre-populated
forms
Electronic data collection method (e.g.,
SWIS, Google Docs, other database tool)
Behaviors that are discrete (clear
beginning and end), countable (low
enough frequency to count), and both
behavior and context are
directly observed or assessed
Examples:
• A tantrum (cluster of behaviors)
where staff saw what preceded and
followed
• A fight among peers where the vice
principal was able to gather
information about what happened
before and after by interviewing
students
Behaviors that are not discrete (no clear beginning or end), not countable, and/or where behavior and context are not directly observed
Non-examples:
• How often a student swears (count)
• How long a student pauses between
assignments (measure inter-response
time)
Additional Tools for Teachers
In addition to using the evidence-based strategies provided in the prior interactive map, self-assessment, and detailed
tables, teachers should apply the following strategy and consider the following guidelines when responding to students’
challenging behavior.
Responding to Behaviors in the Classroom—Make It FAST!
F (Functional): Responding to behavior in a way that tries to address the reason or purpose why a student behaves within specific situations will help reduce the likelihood of the behavior happening in the future (see Practical FBA Training Manual for more information)
A (Accurate): As much as possible, an accurate and consistent response is essential to minimizing problem behavior and increasing compliant behaviors
S (Specific): It is best to be as specific as possible when addressing student behavior; using the student’s name and the reason for the response are examples of how teachers can be specific
T (Timely): Responding to behavior immediately after the behavior will make the response more powerful
Types of Behavior and Common Responses
Appropriate or expected behavior
• When a student does an appropriate behavior, let the student know by telling the student what he or she did and how that behavior aligns with the related school-wide expectation
• Be as specific as possible, and try to always use the student’s name
• Consider using praise with other acknowledgment strategies
Infrequent and non-disruptive minor behaviors
• When a misbehavior occurs, try to draw as little attention to the behavior as possible
• Give students reminders of what is expected
• Model what is expected
• Reinforce what is expected by using specific praise or other acknowledgment strategies
Repeated and non-disruptive minor behavior errors and/or disruptive major behavior errors
• Try your best to anticipate when there might be problems, let students know what you expect, and take some time to practice routines
• Collect data to help establish patterns about why behaviors are occurring
• Follow school procedures for responding to rule violations and individualized behavior support plans
Administrator-managed behaviors
• Follow school procedures for responding to rule violations and individualized behavior support plans
http://www.pbis.org/common/cms/files/pbisresources/practicalfba_trainingmanual
SCENARIOS
The following scenarios highlight how teachers may use these classroom strategies with the decision-making guide to support student behavior in their
classrooms. The first scenario is based in an elementary school. The second scenario is based in a high school.
Scenario 1. Mr. Jorgé’s Third-Grade Classroom
Foundations of Classroom Interventions and Supports
Mr. Jorgé invested time into carefully designing his classroom before any of his 25 third graders arrived in the fall. He carefully planned his routines—from where
students would place materials upon entering the room to where they would line up when getting ready to exit—and ensured the physical layout facilitated
students engaging in routines. He also defined what it looked like for students to follow the school-wide expectations (Safety, Respect, and Responsibility), which
were agreed upon by the faculty and documented in a school-wide matrix, in the context of each of his classroom routines (using an expectations-within-routines
matrix). On the first day of school, Mr. Jorgé greeted students at the door, introduced himself, and invited students into their shared learning environment. He
spent the better part of the first day explicitly teaching the expectations within his classroom routines and establishing his classroom as a positive learning
environment. Throughout the day, he systematically recognized each student who followed the expectations with specific praise (e.g., “Julie, remembering to
bring your materials was really responsible. That’s a great way to start the year!”). He also wrote and invited students to sign a “Classroom Constitution” (also
known as a behavior contract).
Mr. Jorgé’s Classroom Constitution (with strategies in parentheses)
Members of our classroom community are respectful, responsible, and safe (expectations). Mr. Jorgé will support us by teaching us what
this looks like during activities (explicit instruction), providing daily reminders (prompts), and letting us know how we are doing (specific
feedback). If we are able to do this most of the time (during 80 percent of sampled opportunities when the mystery timer goes off) each
day, we will earn 10 minutes of quiet music time at the end of each day (group contingency). During this time, we can start on
homework, read a book, or do a quiet activity with a friend while listening to music. If we aren’t able to do this most of the time, we will
spend the 10 minutes reviewing our classroom expectations so that we can have a better day tomorrow.
Consistent implementation of positive and proactive practices
After the first day, Mr. Jorgé kept up his part of the Classroom Constitution. He greeted students every morning, provided reminders about expected behavior at
the beginning of each activity, ensured his lessons were engaging and included multiple opportunities for students to respond and participate, and gave students
specific feedback when they were doing well. He also found that most students were consistently demonstrating expected behavior.
Minor problem behaviors
Occasionally, a student would engage in minor problem behavior. For example, a student sometimes called out when Mr. Jorgé was teaching rather than
remembering to raise a quiet hand. Rather than getting upset, Mr. Jorgé remembered that this was just an error, much like a student saying that 2 + 2 = 5, and
he could simply correct it. For these minor problem behaviors, Mr. Jorgé let students know their behavior was not appropriate, reminded them what was
expected, and gave them an opportunity to practice and earn positive feedback (e.g., “Jeff, remember to raise your hand rather than call out. Let’s try that again.”
After Jeff quietly raises his hand, “Thanks for raising your hand. Now what did you want to share?”). For most students, this quick error correction helped them
get back on track and meet classroom expectations most of the time.
Many students engaging in more chronic or serious behavior
In early December, all students had missed more than a week of school due to an intense storm. They returned to school as winter break was approaching, and
many routines were disrupted due to these planned and unplanned schedule changes. Mr. Jorgé noticed that many of his students were engaging in consistent
disruptive behavior and his reminders were not sufficient. Therefore, he decided to enhance his classroom strategies. He retaught expected behavior, revisited his
Classroom Constitution, increased how often he provided reminders, and introduced a new incentive: Each student who was engaged in expected behavior when
the mystery timer went off (a kitchen timer Mr. Jorgé would set for 15 to 20 minutes) would earn a ticket, which they could use to purchase “gift cards” for
classroom privileges (e.g., homework pass, photocopying privileges, lunch with Mr. Jorgé in the classroom) at the end of the week. With these added supports,
the majority of students were again engaging in expected behavior.
Few students engaging in chronic or serious problem behavior 26
Despite his intensified intervention approach, Mr. Jorgé noticed that one student, Rob, was starting to display intense levels of behavior. Rob was frequently out of
his seat, and he would often disrupt the learning of his peers by pushing their materials off of their desks when he walked by, calling his peers (and occasionally
Mr. Jorgé) names under his breath, and shouting out repeatedly when Mr. Jorgé was teaching. Mr. Jorgé collected some information. He noted whether Rob was
in or out of his seat at the end of each minute during the 20-minute writing lesson (when Mr. Jorgé had noticed that Rob’s behavior was the most problematic).
After documenting that Rob was out of his seat during 85 percent of observed intervals (17 of the 20 one-minute checks), taking notes on some of the concerning things Rob was saying, and
calculating that Rob was at risk for not meeting grade-level standards, Mr. Jorgé brought his concerns (and data) to the Student Assistance Team. The team
decided that Rob may need more comprehensive supports and contacted Rob’s parents to obtain consent for further evaluation. After getting parental consent, a
team (including the school’s behavioral expert, Rob’s dad, and Mr. Jorgé) was formed to support Rob’s evaluation and intervention. Mr. Jorgé provided information
to support the evaluation (e.g., interview responses, classroom data), and he worked with the team to develop and implement a plan to support Rob’s behavior.
26 See additional resources for Tier 2 or Tier 3 support:
o https://www.pbis.org/training/coach-and-trainer/fba-to-bsp
o http://www.pbis.org/common/cms/files/pbisresources/TrainerManual
o http://iris.peabody.vanderbilt.edu/module/fba/
Scenario 2. Dr. Rubert’s Ninth-Grade Science Class
Foundations of Classroom Interventions and Supports
Dr. Rubert had been teaching freshman science for 15 years when she first heard about the importance of a multi-tiered behavior framework to address behavior
in the same way her school had addressed academics. Although she had always emphasized safety in her lab, she recognized that she may have been more
reactive than proactive. Therefore, she decided to embrace this new approach and rethink her classroom. Before the start of her 16th school year, Dr. R (as her
students called her) revisited the physical design of her classroom and lab. She ensured materials were stored safely and the furniture allowed students to
efficiently transition from desks to lab tables and back again. She clearly reviewed her routines and posted reminders of key routines in important places in the
room. In addition to posting and teaching the school-wide expected behavior matrix, she further defined the same school-wide expectations (safety, respect, and
achievement) for her three main classroom routines in her classroom matrix (below).
Dr. R’s Rules (expectations by routine)
Safety
• Lecture: Keep body and materials to self; ensure walkways are clear; take note of safety instructions for lab
• Lab: Use materials for their intended purpose; wear protective equipment; use the safety procedures specified for each lab
• Seatwork: Keep body and materials to self; ensure walkways are clear; sit to maximize circulation (and attention)
Respect
• Lecture: Actively listen to lecture; keep your eyes and ears focused on Dr. R
• Lab: Assign roles for each lab partner, and clearly communicate plan and actions; check in with lab partner regarding progress and roles
• Seatwork: Do your own work; maintain a quiet work environment; quietly raise your hand if you need the teacher’s attention
Achievement
• Lecture: Use guided notes to document critical content; highlight information to review for homework
• Lab: Complete lab work efficiently; document your process and outcomes; submit lab reports when due
• Seatwork: Do your best work; ask for help when needed; ensure you take any unfinished work home and turn it in the next day
On the first day of the fall semester, Dr. R greeted her students at the door and began her first lecture of the year. She reminded students of the school-wide
expectations, showed a student-created video about how to demonstrate safety, respect, and achievement in the classroom (as all teachers were doing), and then
further described what the expectations looked like during her lectures. She involved students in a quick check, where she read scenarios and asked if students in
the scenario were meeting (or not meeting) each expectation. Then, she delivered the rest of her intro lecture and noted (using her electronic grade book app)
which students were displaying expected behavior and which students were not. She repeated this process the first time she introduced lab and seatwork and
periodically throughout the year.
Consistent implementation of positive and proactive practices
Each day, Dr. R greeted her students at the door, reminded them to get started on the activity listed on the interactive whiteboard, and provided any needed
reminders about expectations for each new lab activity. She worked to make sure her lectures were engaging and provided students with guided notes (outlines
or fill-in-the-blank notes) to ensure they stayed on task. She also designed any in-class seatwork or homework activities to include review problems interspersed
with slightly more challenging application exercises. In addition, she consistently gave students specific feedback when they were engaging in expected
appropriate behavior (e.g., “Thanks for handling those materials safely. I can see you are ready for more advanced labs.”).
Minor problem behaviors
Occasionally, students would engage in minor problem behaviors. For example, during a transition, a couple of students were using their fingers like hockey sticks
and plastic petri dishes as pucks on a lab table. She took a breath, resisting the urge to react with a harsh or loud tone, and instead reminded them how to use
materials safely. She had them show her where the dishes should be stored when not in use, and she thanked them for getting back on track so that she could
finish setting up their lab.
Many students engaging in more chronic or serious behavior
As spring approached, Dr. R was starting to introduce more advanced lab experiences. However, students’ schedules were frequently disrupted by various
activities (e.g., field trips, spring fling), and she was seeing increased rates of inappropriate behavior. For example, when she first introduced Bunsen burners, a
few students played with the burners (while they were turned off) as though they were light sabers—playfully clinking the burners together. Other students
laughed and made fun of Dr. R when she tried to gently correct them. She decided it was time to revisit expectations. She also decided to introduce a classroom
contingency regarding safe lab behavior. Specifically, she let students know that if they could be safe during all lab activities, they could do a “fun” lab at the end
of each two-week unit. If there was one instance of significantly unsafe behavior (i.e., something that could put someone at risk of injury), then all labs were
suspended until students could: (a) pass a safety quiz, (b) demonstrate safe operation of lab equipment, and (c) sign a contract committing to using all materials
safely. With the added review, ongoing reminders, and group contingency, students were back on track with appropriate behavior.
Few students engaging in chronic or serious problem behavior
Despite her best efforts at being proactive, one of Dr. R’s students was starting to concern her. Rachel was a student who seemed to keep to herself. When Dr. R
or a peer tried to approach her, Rachel would often stare blankly, make a rude comment, or turn and walk away. Initially, Dr. R just tried to give her space. But,
by October, she realized that Rachel’s behaviors were not improving. Although it was easy to ignore (Rachel never disrupted the class), after chatting with a
colleague in the languages department, Dr. R found out that Rachel was at risk of failing at least two of her courses. Dr. R also walked through the cafeteria and
saw Rachel sitting outside alone. Dr. R brought her concerns to the vice principal assigned to the 9th and 10th grades, and he pulled Rachel’s attendance and
academic records. It turned out that Rachel was chronically late to first period, had missed more than the “allowed” days, and was at risk for failing five (not just
two) classes. (However, she had earned a 4.0 prior to this semester and had received numerous positive comments from teachers in past school records about
her engaging personality.) Dr. R and the vice principal also reviewed the school-wide screening data and noted that Rachel was higher than average on measures
of internalizing behaviors. Given data supporting her initial concerns, Dr. R decided to refer Rachel to the intensive intervention team, who reviewed data for
Rachel, called her parents, talked with Rachel, and decided to proceed with conducting a functional behavioral assessment and developing an individualized
behavior intervention plan. The team also considered more intensive supports to be developed in collaboration with Rachel and her family using a wraparound
process. Dr. R continued to provide additional supports in class, but she was glad that she had noticed Rachel and that Rachel was getting the support she
needed.
SUMMARY OF CLASSROOM INTERVENTIONS AND SUPPORTS
These classroom strategies should be useful to all educators in achieving positive outcomes for all students, including students who have various abilities, who are from
diverse backgrounds, and who are educated in a range of settings. Although positive and preventative strategies are emphasized, some students may require
additional behavior supports. As such, a number of important assumptions must be considered:
• Students and behaviors are not “bad.” Instead, students engage in behaviors that are inappropriate or problematic for a given context or culture.
• Students engage in behaviors that “work” for them (i.e., result in desired outcomes or reinforcement).
• Educators must act professionally; that is, use planned and established school and classroom procedures in a manner that is calm, neutral, businesslike, and contingent.
• Academic and social behaviors are taught, changed, and strengthened by similar instructional strategies (i.e., model, prompt, monitor, and reinforce).
To reiterate, the classroom strategies and recommendations in this brief support, but are not sufficient for, addressing students with intense needs or crisis
responses to dangerous situations. To take full advantage of these strategies, educators are encouraged to use data to guide their selection and implementation
of strategies, monitor implementation fidelity, and integrate academic and behavior supports into a comprehensive, school-wide multi-tiered framework.
REFERENCES
Abramowitz, A. J., O’Leary, S. G., & Futtersak, M. W. (1988). The relative impact of long and short reprimands on children’s off-task behavior in the classroom. Behavior
Therapy, 19, 243–247.
Acker, M. M., & O’Leary, S. G. (1988). Effects of consistent and inconsistent feedback on inappropriate child behavior. Behavior Therapy, 19, 619–624.
Alberto, P. A., & Troutman, A. C. (2013). Applied behavior analysis for teachers (9th ed.). Upper Saddle River, NJ: Pearson Education.
Arceneaux, M. C., & Murdock, J. Y. (1997). Peer prompting reduces disruptive vocalizations of a student with developmental disabilities in a general eighth-grade
classroom. Focus on Autism and Other Developmental Disabilities, 12, 182–186.
Archer, A., & Hughes, C. (2011). Explicit instruction: Effective and efficient teaching. New York, NY: The Guilford Press.
Baker, J. D. (1992). Correcting the oral reading errors of a beginning reader. Journal of Behavioral Education, 4, 337–343.
Barbetta, P. M., Heward, W. L., Bradley, D. M., & Miller, A. D. (1994). Effects of immediate and delayed error correction on the acquisition and maintenance of sight words
by students with developmental disabilities. Journal of Applied Behavior Analysis, 27, 177–178.
Barrish, H. H., Saunders, M., & Wolf, M. M. (1969). Good behavior game: Effects of individual contingencies for group consequences on disruptive behavior in a classroom.
Journal of Applied Behavior Analysis, 2, 119–124.
Barton, L. E., Brulle, A. R., & Repp, A. C. (1987). Effects of differential scheduling of timeout to reduce maladaptive responding. Exceptional Children, 53, 351–356.
Broden, M., Bruce, C., Mitchell, M. A., Carter, V., & Hall, R. V. (1970). Effects of teacher attention on attending behavior of two boys at adjacent desks. Journal of Applied
Behavior Analysis, 3(3), 205–211.
Brophy, J. E. (2004). Motivating students to learn. Mahwah, NJ: Erlbaum.
Brush, J. A., & Camp, C. J. (1998). Using spaced retrieval as an intervention during speech-language therapy. Clinical Gerontologist, 19, 51–64.
Carnine, D. W. (1976). Effects of two teacher-presentation rates on off-task behavior, answering correctly, and participation. Journal of Applied Behavior Analysis, 9, 199–
206.
Colvin, G., Sugai, G., Good III, R. H., & Lee, Y. Y. (1997). Using active supervision and precorrection to improve transition behaviors in an elementary school. School
Psychology Quarterly, 12, 344.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
Craft, M. A., Alber, S. R., & Heward, W. L. (1998). Teaching elementary students with developmental disabilities to recruit teacher attention in a general education
classroom: Effects on teacher praise and academic productivity. Journal of Applied Behavior Analysis, 31, 399–415.
Deitz, S. M., Repp, A. C., & Deitz, D. E. (1976). Reducing inappropriate classroom behaviour of retarded students through three procedures of differential reinforcement.
Journal of Mental Deficiency Research, 20, 155–170.
DePry, R. L., & Sugai, G. (2002). The effect of active supervision and pre-correction on minor behavioral incidents in a sixth grade general education classroom. Journal of
Behavioral Education, 11(4), 255–267.
Didden, R., de Moor, J., & Bruyns, W. (1997). Effectiveness of DRO tokens in decreasing disruptive behavior in the classroom with five multiply handicapped children.
Behavioral Interventions, 12, 65–75.
Drabman, R. S., Spitalnik, R., & O’Leary, K. D. (1973). Teaching self-control to disruptive children. Journal of Abnormal Psychology, 82, 10–16.
Evertson, C. M., & Emmer, E. T. (1982). Effective management at the beginning of the school year in junior high classes. Journal of Educational Psychology, 74, 485–498.
Faul, A., Stepensky, K., & Simonsen, B. (2012). The effects of prompting appropriate behavior on the off-task behavior of two middle school students. Journal of Positive
Behavior Interventions, 14, 47–55.
Ferguson, E., & Houghton, S. (1992). The effects of teacher praise on children’s on-task behavior. Educational Studies, 18, 83–93.
Flood, W. A., Wilder, D. A., Flood, A. L., & Masuda, A. (2002). Peer-mediated reinforcement plus prompting as treatment for off-task behavior in children with attention
deficit hyperactivity disorder. Journal of Applied Behavior Analysis, 35, 199–204.
Forman, S. G. (1980). A comparison of cognitive training and response cost procedures in modifying aggressive behavior of elementary school children. Behavior Therapy,
11, 594–600.
Foxx, R. M., & Shapiro, S. T. (1978). The timeout ribbon: A nonexclusionary timeout procedure. Journal of Applied Behavior Analysis, 11, 125–136.
Good, T. L., & Brophy, J. E. (2000). Looking in classrooms. New York, NY: Longman.
Greene, R. J., & Pratt, J. J. (1972). A group contingency for individual misbehaviors in the classroom. Mental Retardation, 10, 33–35.
Hall, R. V., Lund, D., & Jackson, D. (1968). Effects of teacher attention on study behavior. Journal of Applied Behavior Analysis, 1, 1–12.
Hansen, S. D., & Lignugaris-Kraft, B. (2005). Effects of a dependent group contingency on the verbal interactions of middle school students with emotional disturbance.
Behavioral Disorders, 30, 170–184.
Heward, W. L. (2006). Exceptional children: An introduction to special education. Upper Saddle River, NJ: Pearson Education/Merrill/Prentice Hall.
Johnson, T. C., Stoner, G., & Green, S. K. (1996). Demonstrating the experimenting society model with class-wide behavior management interventions. School Psychology
Review, 25, 198–213.
Jones, R. T., & Kazdin, A. E. (1975). Programming response maintenance after withdrawing token reinforcement. Behavior Therapy, 6, 153–164.
Kalla, T., Downes, J. J., & van den Broek, M. (2001). The pre-exposure technique: Enhancing the effects of errorless learning in the acquisition of face–name
associations. Neuropsychological Rehabilitation, 11, 1–16.
Kelley, M. L., & Stokes, T. F. (1984). Student–teacher contracting with goal setting for maintenance. Behavior Modification, 8, 223–244.
Kern, L., & Clemens, N. H. (2007). Antecedent strategies to promote appropriate classroom behavior. Psychology in the Schools, 44, 65–75. doi: 10.1002/pits.20206
Lewis, T. J., Colvin, G., & Sugai, G. (2000). The effects of pre-correction and active supervision on the recess behavior of elementary students. Education and Treatment of
Children, 23, 109–121.
Logan, P., & Skinner, C. H. (1998). Improving students’ perceptions of a mathematics assignment by increasing problem completion rates: Is problem completion a
reinforcing event? School Psychology Quarterly, 13, 322–331.
Madsen, C. H., Jr., Becker, W. C., & Thomas, D. R. (1968). Rules, praise, and ignoring: Elements of elementary classroom control. Journal of Applied Behavior Analysis, 1,
139–150.
Main, G. C., & Munro, B. C. (1977). A token reinforcement program in a public junior high school. Journal of Applied Behavior Analysis, 10, 93–94.
McAllister, L. W., Stachowiak, J. G., Baer, D. M., & Conderman, L. (1969). The application of operant conditioning techniques in a secondary school classroom. Journal of
Applied Behavior Analysis, 2, 277–285.
McCullagh, J., & Vaal, J. (1975). A token economy in a junior high school special education classroom. School Applications of Learning Theory, 7, 1–8.
Paine, S. C., Radicchi, J., Rosellini, L. C., Deutchman, L., & Darch, C. B. (1983). Structuring your classroom for academic success. Champaign, IL: Research Press.
Partin, T. C. M., Robertson, R. E., Maggin, D. M., Oliver, R. M., & Wehby, J. H. (2010). Using teacher praise and opportunities to respond to promote appropriate student
behavior. Preventing School Failure: Alternative Education for Children and Youth, 54, 172–178.
Repp, A. C., Deitz, S. M., & Deitz, D. E. (1976). Reducing inappropriate behaviors in classrooms and in individual sessions through DRO schedules of reinforcement. Mental
Retardation, 14, 11–15.
Ritschl, C., Mongrella, J., & Presbie, R. J. (1972). Group time-out from rock and roll music and out-of-seat behavior of handicapped children while riding a school bus.
Psychological Reports, 31, 967–973.
Simonsen, B., Fairbanks, S., Briesch, A., Myers, D., & Sugai, G. (2008). Evidence-based practices in classroom management: Considerations for research to practice.
Education and Treatment of Children, 31, 351–380.
Singh, J., & Singh, N. N. (1986). Increasing oral reading proficiency. Behavior Modification, 10, 115–130.
Singh, N. N. (1990). Effects of two error correction procedures on oral reading errors. Behavior Modification, 14, 188–199.
Skinner, C. H., Belfiore, P. J., Mace, H. W., Williams-Wilson, S., & Johns, G. A. (1997). Altering response topography to increase response efficiency and learning
rates. School Psychology Quarterly, 12, 54–64.
Skinner, C. H., Pappas, D. N., & Davis, K. A. (2005). Enhancing academic engagement: Providing opportunities for responding and influencing students to choose to
respond. Psychology in the Schools, 42, 389–403.
Skinner, C. H., Smith, E. S., & McLean, J. E. (1994). The effects of inter-trial interval duration on sight-word learning rates in children with behavioral disorders. Behavioral
Disorders, 19, 98–107.
Soar, R. S., & Soar, R. M. (1979). Emotional climate and management. In P. L. Peterson & H. J. Walberg (Eds.), Research on teaching: Concepts, findings, and implications
(pp. 97–119). Berkeley, CA: McCutchan.
Sutherland, K. S., Alder, N., & Gunter, P. L. (2003). The effect of varying rates of opportunities to respond to academic requests on the classroom behavior of students with
EBD. Journal of Emotional and Behavioral Disorders, 11, 239–248.
Sutherland, K. S., Wehby, J. H., & Copeland, S. R. (2000). Effect of varying rates of behavior-specific praise on the on-task behavior of students with EBD. Journal of
Emotional and Behavioral Disorders, 8, 2–8.
Sutherland, K. S., Wehby, J. H., & Yoder, P. J. (2002). Examination of the relationship between teacher praise and opportunities for students with EBD to respond to
academic requests. Journal of Emotional and Behavioral Disorders, 10, 5–13.
Trice, A. D., & Parker, F. C. (1983). Decreasing adolescent swearing in an instructional setting. Education & Treatment of Children, 6, 29–35.
West, R. P., & Sloane, H. N. (1986). Teacher presentation rate and point delivery rate effects on classroom disruption, performance accuracy, and response rate. Behavior
Modification, 10, 267–286.
White-Blackburn, G., Semb, S., & Semb, G. (1977). The effects of a good-behavior contract on the classroom behaviors of sixth-grade students. Journal of Applied Behavior
Analysis, 10, 312.
Wilcox, R., Newman, V., & Pitchford, M. (1988). Compliance training with nursery children. Educational Psychology in Practice, 4, 105–107.
Wilder, D. A., & Atwell, J. (2006). Evaluation of a guided compliance procedure to reduce noncompliance among preschool children. Behavioral Interventions, 21, 265–272.
Williams, R. L., & Anandam, K. (1973). The effect of behavior contracting on grades. Journal of Educational Research, 66, 230–236.
Winett, R. A., & Vachon, E. M. (1974). Group feedback and group contingencies in modifying behavior of fifth graders. Psychological Reports, 34, 1283–1292.
Wong, H. K., & Wong, R. T. (2009). The first days of school: How to be an effective teacher. Mountain View, CA: Harry K. Wong Publications.
Yarbrough, J. L., Skinner, C. H., Lee, Y. J., & Lemmons, C. (2004). Decreasing transition times in a second-grade classroom: Scientific support for the timely transitions
game. Journal of Applied School Psychology, 20, 85–107.
Yawkey, T. D. (1971). Conditioning independent work behavior in reading with seven-year-old children in a regular early childhood classroom. Child Study Journal, 2, 23–
34.
Zwald, L., & Gresham, F. M. (1982). Behavioral consultation in a secondary class: Using DRL to decrease negative verbal interactions. School Psychology Review, 11,
428–432.
- Supporting and Responding to Behavior: Evidence-Based Classroom Strategies for Teachers
Purpose and Description
What is the purpose of this document?
What needs to be in place before I can expect these strategies to work?
What are the principles that guide the use of these strategies in the classroom?
User Guide
What is included in this guide?
What is not included in this guide?
Where do I start?
Interactive Map of Core Features
Classroom Interventions and Supports
Self-Assessment
Decision-Making Chart
Table 1. Matrix of Foundations for Classroom Interventions and Supports
Table 2. Matrix of Practices for Classroom Interventions and Supports
Table 3. Matrix of Data Systems for Classroom Interventions and Supports
Additional Tools for Teachers
Scenarios
Scenario 1. Mr. Jorgé’s Third-Grade Classroom
Scenario 2. Dr. Rubert’s Ninth-Grade Science Class
Summary of Classroom Interventions and Supports
References
Journal of Positive Behavior Interventions, 15(1), 5–15
© 2013 Hammill Institute on Disabilities
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1098300712440453
http://jpbi.sagepub.com
Teachers typically enter the field with inadequate training in
behavioral instruction and classroom management (Begeny
& Martens, 2006). Therefore, school leaders and special-
ized support staff (e.g., administrators, school psycholo-
gists, special educators) need to identify effective and
efficient ways to support teachers’ use of evidence-based
classroom management practices. Multiple strategies have
been explored for teacher training, including didactic train-
ing, prompting, modeling, role playing, feedback, and rein-
forcement (Allen & Forman, 1984). Across studies, the
consensus is that training alone does not result in changes in
teacher behavior (Allen & Forman, 1984; Fixsen, Naoom,
Blase, Friedman, & Wallace, 2005).
Instead, research suggests that performance feedback, in
combination with training, results in desired increases in
teachers’ use of classroom management practices (e.g.,
Abbott et al., 1998; Jeffrey, McCurdy, Ewing, & Polis,
2009; Noell, Witt, Gilbertson, Rainer, & Freeman, 1997;
Simonsen, Myers, & DeLuca, 2010). Performance feedback
involves collecting data on an individual’s behavior and
providing feedback about that behavior (Noell et al., 2005).
Although effective, performance feedback is time intensive,
and typical school resources often limit its feasibility.
Rather than relying on another individual to observe, collect
data, and provide feedback, it may be possible to train
teachers to monitor, record, and provide feedback on their
own behavior. Thus, self-management may be a potential
solution to the training-to-practice gap.
According to Skinner (1953), individuals manage their
own behavior in the same manner as they manage anyone
else’s—“through the manipulation of variables of which
behavior is a function” (p. 228). That is, individuals manip-
ulate the antecedents and consequences of their own behav-
ior, and they engage in other (self-management) behaviors
to make target behaviors more or less likely. Over the past
10 years, researchers have studied self-management in vari-
ous populations of adults, including adults who are obese
(Donaldson & Normand, 2009), have asthma (Caplin &
Creer, 2001; Creer, Caplin, & Holroyd, 2005; Ngamvitroj &
Kang, 2007), have depression (Rokke, Tomhave, & Jocic,
2000), and are experiencing insomnia (Creti, Libman,
Bailes, & Fichten, 2005). Generally, studies have found that
self-management interventions are related to desired behav-
ior changes in adults.
1University of Connecticut, Storrs, CT, USA
Corresponding Author:
Brandi Simonsen, University of Connecticut, Educational Psychology, 249
Glenbrook Rd., Unit 2064, Storrs, CT 06269-2064, USA
Email: brandi.simonsen@uconn.edu
Action Editor: V. Mark Durand
The Effects of Self-Monitoring on
Teachers’ Use of Specific Praise
Brandi Simonsen, PhD1, Ashley S. MacSuga, MA1,
Lindsay M. Fallon, MA1, and George Sugai, PhD1
Abstract
Teachers typically enter the field with limited training in classroom management, and research demonstrates that training
alone does not result in improved practice. Typically, researchers have relied on time-intensive training packages that include
performance feedback to improve teachers’ use of classroom management practices; however, initial evidence suggests
that self-management may be an effective and efficient alternative. In this study, the authors directly compared the effects
of three different self-monitoring conditions (tally, count, and rate) and no self-monitoring on five middle school teachers’
rate of specific praise using an alternating treatments design. The authors also included baseline and follow-up phases to
descriptively explore the effects of self-monitoring across time. Results indicate that noting each instance of specific praise
by either tallying or using a counter resulted in optimal performance, and teachers preferred using a counter. Additional
study results, limitations, and implications are discussed.
Keywords
classroom management, specific praise, teacher training, teacher self-management, teacher self-monitoring
Researchers have also begun to explore the use of self-
management with teachers. For example, Browder, Liberty,
Heller, and D’Huyvetters (1986) found that teachers made bet-
ter instructional decisions (i.e., choices about maintaining or
changing instructional practices based on students’ academic
performance) when they were trained to self-monitor. Self-
monitoring is noting the presence, absence, or level of a spe-
cific behavior and is one example of self-management (Cooper,
Heron, & Heward, 2007). Similarly, Allinder, Bolling, Oats,
and Gagnon (2000) found that teachers who self-monitored
made better instructional decisions that resulted in better stu-
dent performance than teachers who did not self-monitor.
Researchers have also started to examine the effects of
self-management on teachers’ use of praise (e.g., Keller,
Brady, & Taylor, 2005; Sutherland & Wehby, 2001;
Workman, Watson, & Helton, 1982). Praise is an empiri-
cally supported classroom management practice (Simonsen,
Fairbanks, Briesch, Myers, & Sugai, 2008) that can be
effective if contingent, credible, and specific (Brophy,
1981). That is, saying “Thank you for raising your hand”
immediately after a student raised her or his hand would be
more effective in increasing the likelihood of hand raising
than providing general feedback, such as “good job,” min-
utes after the desired behavior. Research has shown, for
example, that increases in teachers’ specific praise are asso-
ciated with increases in students’ on-task behavior (Chalk &
Bizo, 2004; Sutherland, Wehby, & Copeland, 2000).
Given the importance of specific praise, Sutherland and
Wehby (2001) investigated the effects of teachers’ self-
evaluation on their use of specific praise with students with
emotional and behavioral disorders. They trained a group of
teachers to use praise, audio record a segment of instruc-
tion, review the tape later in the day, and self-evaluate (i.e.,
calculate praise rates, set goals, deliver self-praise, and
graph their progress). When they compared the praise rates
of these teachers with teachers in a control group, they
found that (a) teachers in the self-evaluation group demon-
strated higher levels of praise and lower levels of repri-
mands and (b) their students gave higher levels of correct
responses. Keller et al. (2005) conducted a similar study
(using self-evaluation of audiotapes) with student teaching
interns. They also found increases in preservice teachers’
rates of specific praise as a result of self-evaluation.
Together, these studies demonstrated the positive effects of
self-evaluation on teachers’ use of praise; however, the self-
evaluation methods required time outside of instruction
(i.e., daily review of taped instruction), and researchers did
not explore alternative methods for self-evaluation.
In the present study, we directly compared the effective-
ness of three simple and efficient self-monitoring condi-
tions (tally, count, and rate) and no self-monitoring on
teachers’ use of specific praise. Specifically, this study
addressed the following research question: Which self-
monitoring strategy is associated with the highest rate of
specific praise during teacher-directed instruction for indi-
vidual middle school teachers? In addition, we explored
which strategy resulted in the highest fidelity of implemen-
tation (including measures of both adherence and accuracy)
and was preferred by each teacher. The findings of this study
add to the limited literature on teachers’ self-management of
evidence-based classroom management strategies.
Method
Setting and Participants
This study took place in an urban middle school in New
England. The year before this study, approximately half
(range 44%–66%) of the student body (N = 926 students,
Grades 5–8) scored below proficient in reading, writing,
math, or science as measured by statewide tests (http://www.
greatschools.net); and more than half (75%) of the student
body was eligible for free or reduced-price lunch (http://nces.
ed.gov). Student ethnicity was described as 62.7% Hispanic,
29.9% White, 5.6% Black, 1.5% Asian, and 0.4% American
Indian (http://nces.ed.gov).
The staff at this school (including 87.8 full-time equiva-
lent teachers) had been exposed to positive behavior sup-
port strategies across a variety of staff in-service and
professional development events. Despite this training,
some teachers continued to struggle with implementing
evidence-based classroom management strategies. The pri-
mary researchers (the first two authors) approached teach-
ers during team meetings and presented this study as an
opportunity to receive feedback and support with classroom
management. Interested teachers provided researchers with
a preferred mode of contact (e.g., email), and researchers
scheduled individual meetings to explain the scope of the
study and obtain written informed consent. Ultimately, five
female teachers volunteered to participate in this study.
Teacher 1 earned a BA, was certified in math (Grades
7–12), and taught eighth grade math. Teacher 2 earned her
BA, was certified in elementary education (K–8), and
taught fifth grade language arts. Teacher 3 earned an MA,
was certified in special education, and taught reading, lan-
guage arts, and math to fifth through eighth grade students
with disabilities who were also English language learners.
Teacher 4 completed her master’s degree plus 15 credits,
was certified in elementary education (K–8) and science
(Grades 4–8), and taught seventh grade science. Teacher 5
earned an MA, was certified in elementary education (K–8),
and taught fifth grade math. The teachers ranged in experi-
ence: At the time of the study, Teachers 1, 2, 3, 4, and 5 had
accrued 3, 2, 13, 28, and 4 years of teaching experience,
respectively.
Each participating teacher selected the class period in
which she experienced the greatest challenges with class-
room management, and she identified a 15-min segment of
that period when she provided teacher-directed instruction
to serve as the focus of intervention and data collection.
Dependent Measures
Teachers’ use of specific praise was the primary dependent
variable in this study. In addition, we explored teachers’
fidelity of implementation and the social validity for each
self-monitoring strategy, as described in the next subsections.
Systematic direct observation (SDO). SDO data were col-
lected during 15-min observations of teacher-directed
instruction during the selected class period. Trained data
collectors recorded the frequency with which teachers
delivered specific praise by making a tally mark any time
the teacher provided audible, specific, and positive verbal
feedback to one or more students contingent on behavior
(e.g., “Thank you for raising your hand”). The frequency of
specific praise was converted to rate by dividing by number
of minutes observed.
Four data collectors (two PhD and two MA students)
were trained to collect SDO data across a series of training
activities. First, data collectors met with the lead researcher
(first author) to review operational definitions and proce-
dures for data collection. Then, the graduate student project
coordinator and lead data collector (second author) pro-
vided additional practice using video segments and in vivo
observations in each teacher’s classroom. Training activi-
ties continued until all data collectors met or exceeded 85%
interobserver agreement (IOA). In addition, retraining
meetings (to review operational definitions and discuss
areas of disagreement) were held in the event of decreases
in IOA (below criterion levels).
IOA was assessed during 40% (100 of 249) of sessions.
Agreement for teachers’ use of specific praise was calcu-
lated by dividing the smaller number of praise statements
recorded (agreements) by the larger number of praise state-
ments recorded by observers during the same 15-min obser-
vation (opportunities for agreement) and multiplying by
100%. IOA was acceptable across the study (M = 85.2%,
SD = 8.8%) and for each teacher (Teacher 1: M = 82.7%, SD
= 11.5%; Teacher 2: M = 85.5%, SD = 8.8%; Teacher 3: M
= 85.74%, SD = 7.7%; Teacher 4: M = 86.5%, SD = 5.5%;
Teacher 5: M = 86.2%, SD = 9.0%).
Indicators of implementation fidelity. To determine whether
teachers were self-monitoring with fidelity, we collected
data on both the adherence to and accuracy of teachers’ self-
monitoring across conditions. To assess adherence, trained
observers noted whether the teacher was implementing the
assigned self-monitoring strategy (fully, partially, or not at
all) or the incorrect strategy by checking the appropriate
box on the data sheet for each observation. Specifically,
observers recorded that the teacher was implementing (a)
fully if she consistently used the assigned self-monitoring
strategy throughout the observation, (b) partially if she
implemented the assigned strategy for part of the time or
implemented some (but not all) of the features of the strat-
egy, (c) not at all if she did not use any self-monitoring
strategy for any period of time, or (d) the incorrect strategy
if she implemented a different strategy than assigned for
that observation (e.g., tallied when she should have rated).
To assess accuracy, observers recorded the self-monitoring
data each teacher collected during the observation. That is,
observers noted the total number of praise statements
recorded by the teacher using the assigned self-monitoring
strategy (i.e., total tallies, total count, or rating) at the end of
each observation. We calculated agreement between the
teacher and observer by dividing the smaller number (agree-
ments) by the larger number (opportunities for agreement)
of praise statements recorded and multiplying by 100%.
Social validity measures. The first author adapted ques-
tions on the Intervention Rating Profile-15 (IRP-15; Mar-
tens, Witt, Elliott, & Darveaux, 1985) to collect descriptive
data on the acceptability of each self-monitoring strategy
from the teachers’ perspectives. The adapted IRP-15 con-
sisted of five questions that prompt responses on a 1
(strongly disagree) to 6 (strongly agree) scale, and a sixth
open-ended question prompts teachers to share any com-
ments or concerns. Scores on the IRP-15 have been found to
be reliable indicators of intervention acceptability (Martens
et al., 1985), and researchers have used it to assess the
acceptability of academic and behavioral interventions by
teachers (e.g., Reynolds & Kelley, 1997).
Design and Procedures
We used a modified alternating treatments design (e.g.,
Barlow & Hayes, 1979; Gast, 2010), with baseline, alter-
nating treatments, optimal treatment, and follow-up phases,
to explore the relative effectiveness of different self-monitoring
strategies on five teachers’ use of specific praise during
teacher-directed instruction. All five teachers progressed
through baseline, alternating treatments, and indicated
treatment phases. Teachers whose data were either stable or
demonstrated clear increasing or decreasing trends pro-
gressed to one of two possible follow-up phases: (a) main-
tenance (for teachers who demonstrated high stable levels
or increasing trends during the indicated treatment phase)
or (b) performance feedback (for teachers who demon-
strated low levels or decreasing trends during the indicated
treatment phase). Each phase is described in the following
sections.
Baseline phase. During baseline, we observed and
recorded each teacher’s rate of specific praise before any
training was delivered. After a stable pattern was docu-
mented (i.e., three or more consecutive data points with
minimal variability or a clear trend), each teacher received
a brief scripted training on how to provide specific praise.
The training comprised discussion (definition, rationale,
examples, and critical features of specific praise), applica-
tion activity (scripting contextually appropriate specific
praise statements), introduction to self-monitoring (defini-
tion of self-management, explanation of three self-monitoring
strategies, and instructions on how to use each), and a brief
summary of study purpose. Fidelity of training was mea-
sured by an observer completing a rating of the extent to
which each component was delivered with no, partial, or full
fidelity during the training of each teacher. All training com-
ponents were delivered with 100% fidelity to all teachers.
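The baseline stability criterion can be read as a simple rule over the most recent data points. A sketch under stated assumptions (the study judged stability by visual analysis; the numeric tolerance below is invented for illustration):

```python
def is_stable(rates: list[float], tolerance: float = 0.1) -> bool:
    """Illustrative stability check: three or more consecutive data
    points whose range stays within the tolerance."""
    if len(rates) < 3:
        return False
    recent = rates[-3:]
    return max(recent) - min(recent) <= tolerance

print(is_stable([0.15, 0.12, 0.18]))  # True: minimal variability
print(is_stable([0.10, 0.45, 0.20]))  # False: still too variable
```

Note this captures only the "minimal variability" half of the criterion; a clear trend would also have ended baseline.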
Alternating treatments phase. Following the brief training,
the alternating treatments phase of the study began. During
this phase, the teacher’s behavior was observed during the
following four self-monitoring “treatments” or conditions.
1. Tally of specific praise statements. Teachers
were instructed to record a tally each time they
provided specific praise to one or more students
during teacher-directed instruction (i.e., during
observation). Most teachers used a Post-It note or
clipboard so that they could carry the tally sheet
around the classroom with them.
2. Count of specific praise statements (using coun-
ter). Teachers were instructed to press a button
to advance a small yellow golf counter each time
they provided specific praise to one or more stu-
dents during teacher-directed instruction.
3. Rating of specific praise statements (using brief
rating scale). Teachers were instructed to rate
their use of praise during teacher-directed instruc-
tion by estimating the number of specific praise
statements they provided per minute on a 0–4
times per minute scale.
4. Day off. To directly evaluate the effects of the
three conditions relative to the absence of self-
monitoring, teachers were also given “days off”
(i.e., no self-monitoring).
The tally and count conditions were designed to compare the
effects, fidelity, and social validity of two different methods
of frequency recording; the rate condition was designed as a
comparison condition that may require less effort than fre-
quency counting.
Teachers were also instructed to record their data daily.
For the tally and count conditions, teachers (a) recorded the
total number of specific praise statements, (b) recorded the
number of minutes of data collection (typically 15), (c) cal-
culated their rate of specific praise (number of specific
praise statements divided by number of minutes on sum-
mary sheet), and (d) graphed their specific praise rate on the
summary sheet we provided. For the rate condition, teach-
ers recorded and graphed their rating of specific praise on
the summary sheet we provided.
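As a worked example of the daily summary calculation (hypothetical numbers; the helper below is ours, not a study material):

```python
def praise_rate(total_statements: int, minutes: int = 15) -> float:
    """Specific praise rate: total statements divided by observation minutes."""
    return total_statements / minutes

# Example: 12 specific praise statements in a 15-min observation
print(praise_rate(12))  # 0.8 statements per minute
```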
Each condition was implemented once a day, during the
same 15-min period of teacher-directed instruction observed
during baseline. Condition order was randomly scheduled
by drawing condition names without replacement, such that
each condition was implemented once every 4 days.
Condition order was communicated to teachers via a written
schedule, which was located on the summary sheet where
they recorded their self-monitoring data. If additional days
needed to be scheduled (e.g., if data were unstable and the
teacher needed to remain in the alternating treatments
phase), additional conditions were communicated in writing
via email and with an updated summary sheet. In addition,
teachers were offered an email reminder of condition order,
and all teachers were emailed if schedule changes (e.g., half
days, snow days) altered the planned schedule. Emails did
not contain any feedback on their use of specific praise.
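A minimal sketch of this scheduling procedure, assuming the four condition names used in the study (the draw-without-replacement logic is one plausible implementation; the study drew names by hand):

```python
import random

CONDITIONS = ["tally", "count", "rate", "day off"]

def condition_schedule(num_days: int) -> list[str]:
    """Draw condition names without replacement so that each condition
    is implemented exactly once in every block of four days."""
    days: list[str] = []
    while len(days) < num_days:
        block = CONDITIONS[:]   # fresh set of names for the next 4-day block
        random.shuffle(block)   # random order; no name repeats within a block
        days.extend(block)
    return days[:num_days]

print(condition_schedule(8))
# e.g., ['rate', 'day off', 'count', 'tally', 'count', 'tally', 'rate', 'day off']
```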
Data collection continued until a stable pattern of behav-
ior and separation among the conditions was documented
for each teacher. In addition, observers collected data on the
fidelity with which each teacher implemented the selected
self-management strategy each day. If teachers did not
implement the correct strategy, they received a reminder
from the data collector either in person or via email about
condition order (with no performance feedback).
At the end of this phase, primary researchers (first two
authors) met with each teacher to (a) review the components
of the initial training (i.e., each component was quickly rein-
troduced and teachers were given the opportunity to ask
questions), (b) inform each teacher which condition was
considered optimal for her, and (c) give each teacher the
opportunity to complete the social validity questionnaires
(based on the IRP-15) for each self-monitoring strategy.
Optimal treatment phase. During this phase, each teacher
continued to implement the self-management strategy asso-
ciated with her best performance. The optimal self-monitoring
strategy was selected using one of the following decision
rules (in order of preference): the strategy associated with
the highest (a) level or increasing trend of specific praise
(visual analysis), (b) mean rate, (c) mean accuracy (agree-
ment between teacher and data collector), or (d) mean
adherence (rated by the data collector) during the observed
teacher-directed instruction activities. For example, if
visual analysis did not reveal a clear optimal strategy (deci-
sion rule “a”), then researchers selected the strategy with
the highest mean rate (decision rule “b”). During this phase,
observers continued to collect SDO data on teacher behav-
ior and record the fidelity (accuracy and adherence) with
which each teacher used the self-management strategy.
Teachers remained in the optimal treatment phase until a
stable pattern of responding or clear trend emerged. If
highly variable performance was observed, a teacher
remained in this phase through the end of the study.
Otherwise, a teacher was moved to a follow-up phase.
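Because the decision rules form a tie-breaking cascade, a sketch may clarify their order of application. Everything below is illustrative: rule (a) was a human visual-analysis judgment, and the tie tolerance stands in for the researchers' judgment that two values were "similar":

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StrategyStats:
    name: str
    mean_rate: float       # mean specific praise rate per minute
    mean_accuracy: float   # mean teacher-observer agreement
    mean_adherence: float  # mean adherence rating (0 = not at all, 2 = fully)

def select_optimal(stats: list[StrategyStats],
                   visual_winner: Optional[str] = None,
                   tolerance: float = 0.05) -> str:
    """Apply the decision rules in order of preference: (a) visual
    analysis, (b) mean rate, (c) mean accuracy, (d) mean adherence."""
    if visual_winner is not None:          # rule (a), judged by a human
        return visual_winner
    candidates = list(stats)
    for key in ("mean_rate", "mean_accuracy", "mean_adherence"):  # rules (b)-(d)
        best = max(getattr(s, key) for s in candidates)
        candidates = [s for s in candidates if best - getattr(s, key) <= tolerance]
        if len(candidates) == 1:
            return candidates[0].name
    return candidates[0].name              # tied on every criterion

# Teacher 3's alternating-phase means (Tables 1-3): tally wins on mean rate
print(select_optimal([StrategyStats("count", 1.34, 0.58, 1.60),
                      StrategyStats("tally", 1.53, 0.76, 1.67),
                      StrategyStats("rate", 0.61, 0.57, 2.00)]))  # tally
```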
Follow-up phases. Based on data, teachers were moved
into one of two follow-up phases: maintenance (weekly
data probes) or performance feedback (daily data updates
and suggestions for using specific praise). Specifically, if a
teacher demonstrated either a high stable level or clearly
increasing trend in specific praise rate, she was moved into
maintenance. That is, the lead data collector (second author)
informed her that her performance indicated that she was
ready to move into maintenance (weekly data probes). Dur-
ing each probe, the teacher self-monitored with the optimal
strategy and observers continued to collect SDO data.
If a teacher demonstrated either a low stable or clearly
decreasing trend in specific praise rate during the optimal
treatment phase, she was moved into performance feed-
back. That is, researchers met with her and provided verbal
and graphic performance feedback using a one-page sheet
that summarized the critical features of specific praise,
shared contextually appropriate examples of specific praise
statements for her classroom, and presented summary data
(bullet points summarizing means and a graph of specific
praise rates across conditions and phases). Throughout this
phase, observers continued to collect daily data, and the
lead data collector emailed the teacher an updated perfor-
mance feedback sheet, which included each additional day
of data, other examples, and summary statements, such as,
“You increased/decreased your rate of specific praise to X
today. Remember your goal is at least Y times per minute.”
Teachers were asked to respond via email that they had
received and reviewed the performance feedback sheet.
At the end of the study, researchers met with each teacher
to (a) provide feedback about their performance throughout
the study, which included a one-page data summary with
suggestions for on-going improvement in classroom man-
agement after the study, and (b) thank them for their partici-
pation with a $50 gift card.
Results
In the following sections, results are summarized for imple-
mentation fidelity, SDO of teachers’ specific praise rates,
and social validity.
Fidelity of Self-Monitoring
In general, teachers adhered to the self-monitoring condi-
tions across phases (Table 1). All teachers fully refrained
from self-monitoring during the “day off” (no intervention)
condition, and all teachers rated their specific praise during
the rating condition. The count and tally conditions required
a greater response effort throughout the 15-min observation
and were associated with higher variability across teachers
during the alternating treatments, optimal condition, and
follow-up phases. The accuracy of teachers’ self-monitoring
varied among the teachers and across conditions (Table 2).
SDO Data (Teacher Praise Rates)
SDO data for teachers’ specific praise rates were graphed
and analyzed visually within and across phases for each
teacher (Figure 1). In addition, means and standard devia-
tions were calculated for teachers’ specific praise rates
(Table 3). Given the concerns with these measures of cen-
tral tendency and spread for auto-correlated data (i.e.,
repeated measures), these data should be interpreted with
caution. Results are summarized for each teacher by phase.
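A short sketch of how per-condition means and standard deviations might be computed from SDO records (hypothetical data; the same autocorrelation caveat applies to any summary produced this way):

```python
import statistics
from collections import defaultdict

# Hypothetical SDO records: (condition, specific praise rate per minute)
observations = [("count", 1.10), ("tally", 0.55), ("rate", 0.80),
                ("count", 1.30), ("tally", 0.60), ("rate", 0.70)]

by_condition = defaultdict(list)
for condition, rate in observations:
    by_condition[condition].append(rate)

for condition, rates in sorted(by_condition.items()):
    # Repeated measures are auto-correlated; treat these as descriptive only
    print(f"{condition}: M = {statistics.mean(rates):.2f}, "
          f"SD = {statistics.stdev(rates):.2f}")
```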
Teacher 1. During baseline, Teacher 1 demonstrated low
and generally stable levels of specific praise. During the
alternating treatments phase, Teacher 1 demonstrated an
increase in level of specific praise across all conditions, and
the count condition was associated with the greatest level of
specific praise, with the exception of the final data point.
Table 1. Mean and Standard Deviation Rating of Adherence to Self-Monitoring (0 = Not at All, 1 = Partially, 2 = Fully) for Each Condition (Alternating Conditions, Optimal Condition, and Follow-Up Phases) Across Teachers (1–5); values are M (SD)

Teacher | No intervention | Count | Tally | Rate | Optimal (count or tally) | Follow-up (maintenance or feedback)
1 | 2.00 (0.00) | 2.00 (0.00) | 1.80 (0.45) | 2.00 (0.00) | 2.00 (0.00) | 2.00 (0.00)
2 | 2.00 (0.00) | 2.00 (0.00) | 1.75 (0.50) | 2.00 (0.00) | 1.95 (0.22) | —
3 | 2.00 (0.00) | 1.60 (0.55) | 1.67 (0.82) | 2.00 (0.00) | 1.94 (0.25) | 1.75 (0.71)
4 | 2.00 (0.00) | 2.00 (0.00) | 1.83 (0.41) | 2.00 (0.00) | 2.00 (0.00) | 1.29 (0.95)
5 | 2.00 (0.00) | 1.50 (0.84) | 1.20 (0.84) | 2.00 (0.00) | 2.00 (0.00) | 2.00 (0.00)
Note. No intervention, count, tally, and rate are the alternating conditions. Count was the optimal condition for Teachers 1, 2, and 5; tally was the optimal condition for Teachers 3 and 4. Teacher 1 was moved into maintenance, and Teachers 3, 4, and 5 received performance feedback during follow-up phases.
During the optimal treatment phase, Teacher 1 demon-
strated a clear increasing trend, resulting in a high level of
specific praise. Given the high level and increasing trend of
specific praise during the optimal treatment phase, Teacher
1 was moved into maintenance (weekly probes), and she
maintained a high and stable level of specific praise.
Teacher 2. During baseline, Teacher 2 demonstrated a
low and stable specific praise rate. During the alternating
treatments phase, Teacher 2 demonstrated variable specific
praise rates across conditions. Because count, tally, and rate
conditions were associated with similarly high levels of
specific praise, the count condition was selected as her opti-
mal condition as it was associated with the highest level of
accuracy. During the optimal treatment phase, Teacher 2
demonstrated highly variable specific praise rates, which
were generally higher than her rates in previous phases and
increased in trend throughout the phase. Because of high
variability, data collection continued in this phase, and she
was not moved to a follow-up phase.
Teacher 3. Teacher 3’s specific praise rates increased
throughout the baseline phase. Unlike other teachers, she
received training, but she asked not to begin the alternating
treatments phase until a later date (after statewide testing
concluded). Thus, she received training during baseline (as
indicated by the arrow in Figure 1). During the alternating
treatments phase, an immediate increase in level was
observed for all self-monitoring conditions. Throughout this
phase, specific praise rates were variable and generally
decreased in trend across all conditions. The tally condition
was associated with the highest average specific praise rate
and was selected as her optimal condition. During the opti-
mal treatment phase, Teacher 3 maintained a relatively high
specific praise rate. However, her data were variable and gen-
erally lower than data for the same condition (tally) during
the alternating treatments phase. Therefore, we provided per-
formance feedback during the follow-up phase. When
receiving performance feedback, Teacher 3’s specific praise
rates were more variable and slightly lower than in the opti-
mal treatment phase. In other words, daily performance feed-
back was not associated with greater improvements in praise
rate than self-monitoring with the optimal strategy (tally).
Teacher 4. Teacher 4 engaged in low and stable rates of
specific praise throughout the baseline phase. During the
alternating treatments phase, Teacher 4’s specific praise rates
remained relatively low with variability across conditions.
Tally and count conditions were both associated with similar
specific praise rates; therefore, tally was selected as the opti-
mal condition based on her accuracy. During the optimal
treatment phase, Teacher 4 demonstrated a slight increase in
specific praise rate, but her data were variable and demon-
strated a slight decreasing trend. As a result, performance
feedback was provided during the follow-up phase, and her
specific praise rates increased in level and trend.
Teacher 5. Teacher 5 provided low and stable rates of spe-
cific praise during baseline. Her specific praise rates were
highly variable, and overlap was noted among conditions
throughout the alternating treatments phase. Both tally and
count conditions were associated with the highest average
rate, and she implemented both with a similar level of accu-
racy. Therefore, the count strategy was selected because it
was associated with the highest level of adherence. During
the optimal treatment phase, Teacher 5 increased her average
specific praise rate, but her data were still variable and rela-
tively low in comparison with other teachers. Therefore, per-
formance feedback was provided during the follow-up
phase. The introduction of daily performance feedback was
associated with a slight increase in level and trend.
Social Validity
In general, teachers found self-monitoring strategies accept-
able (Figure 2).

Table 2. Mean and Standard Deviation Accuracy of Self-Monitoring (Agreement Between Teacher and Observer) for Each Condition (Alternating Conditions, Optimal Condition, and Follow-Up Phases) Across Teachers (1–5); values are M (SD)

Teacher | Count | Tally | Rate | Optimal (count or tally) | Follow-up (maintenance or feedback)
1 | 0.69 (0.19) | 0.62 (0.20) | 0.62 (0.26) | 0.77 (0.16) | 0.74 (0.07)
2 | 0.73 (0.14) | 0.63 (0.23) | 0.48 (0.28) | 0.59 (0.15) | —
3 | 0.58 (0.38) | 0.76 (0.17) | 0.57 (0.40) | 0.76 (0.18) | 0.66 (0.36)
4 | 0.24 (0.16) | 0.35 (0.27) | 0.20 (0.20) | 0.45 (0.20) | 0.23 (0.25)
5 | 0.50 (0.32) | 0.52 (0.37) | 0.15 (0.12) | 0.49 (0.19) | 0.79 (0.10)
Note. Count, tally, and rate are the alternating conditions; accuracy data were not collected during the no intervention condition as teachers did not record data. Count was the optimal condition for Teachers 1, 2, and 5; tally was the optimal condition for Teachers 3 and 4. Teacher 1 was moved into maintenance, and Teachers 3, 4, and 5 received performance feedback during follow-up phases.

Relative to other strategies, teachers indicated
that self-monitoring with the counter resulted in greater
decreases in students’ inappropriate behavior and greater
increases in students’ appropriate behavior, that it was easier
and less effortful, and that they were more likely to recom-
mend it to others.
Discussion
In this study, we examined the effects of three self-monitoring
strategies (tally, count, and rate) and no self-monitoring on
five middle school teachers’ use of specific praise during
Figure 1. Specific praise rate (per minute) across phases and conditions for Teachers 1–5
teacher-directed instruction. In general, we found that (a)
teachers adhered to all self-monitoring conditions, but
recorded their praise rates with varying levels of accuracy
across conditions; (b) teachers’ specific praise rates were
higher during self-monitoring conditions than baseline or
the no self-monitoring condition, with either count or tally
considered optimal; and (c) teachers preferred the count
strategy. Therefore, self-monitoring may be a promising
strategy for increasing teachers’ use of specific praise. In
the following sections, we discuss study results, limitations,
and implications in more detail.
Discussion of Study Results
All teachers engaged in low and stable rates of specific
praise during baseline, and the introduction of the three
self-monitoring strategies during the alternating treatments
phase was associated with an increase in level, trend, or
both across teachers, with the exception of the rating strat-
egy for Teacher 3. Count and tally conditions were found to
be optimal because they were associated with the highest
levels of specific praise, accuracy of recording, or adher-
ence to the strategy for participating teachers. Both condi-
tions required teachers to note each time they delivered
specific praise and differed only in the mode of recording
(counter vs. paper and pencil). Teachers preferred the coun-
ter condition to the other self-monitoring strategies and
indicated that the counter was the easiest to implement,
resulted in the most desired outcomes, and required an
acceptable level of effort. For example, Teacher 1 com-
mented that the counter may have served as a prompt
because holding it “reminded [her] to make praise state-
ments,” whereas she “forgot to tally and praise.” Similarly,
Teacher 2 commented that the counter was “easier to have
with her for the whole 15 min,” unlike the tally sheet,
which she often left on her desk. In other words, teachers
considered the counter the most efficient and effective
strategy.
Following the alternating treatment phase, two addi-
tional phases were implemented: optimal treatment and
follow-up. Because neither phase was implemented in a
staggered fashion across teachers, experimental control was
not achieved and the following results are descriptive in
nature. During the optimal treatment phase, Teacher 1
clearly increased the level and trend of specific praise,
Teacher 2 increased the level of specific praise, but her per-
formance remained variable, and Teachers 3–5 engaged in
inconsistent levels of specific praise.
When Teacher 1 was moved into the maintenance phase,
she maintained her level of praise across three weekly
probes. She commented that she appreciated learning how
to effectively praise, and she used this skill throughout the
selected period and across her other classes.

Table 3. Mean and Standard Deviation Rate of Specific Praise Statements per Minute for Each Condition (Baseline, Alternating Conditions, Optimal Condition, and Follow-Up Phases) Across Teachers (1–5); values are M (SD)

Teacher | Baseline | Alternating: No intervention | Alternating: Count | Alternating: Tally | Alternating: Rate | Optimal (count or tally) | Follow-up (maintenance or feedback)
1 | 0.15 (0.21) | 0.53 (0.20) | 1.11 (0.46) | 0.55 (0.32) | 0.75 (0.34) | 1.23 (0.58) | 1.51 (0.38)
2 | 0.02 (0.04) | 0.19 (0.11) | 0.69 (0.28) | 0.62 (0.19) | 0.67 (0.54) | 1.08 (0.59) | —
3 | 0.62 (0.36) | 1.00 (0.83) | 1.34 (0.57) | 1.53 (0.91) | 0.61 (0.54) | 1.07 (0.35) | 0.79 (0.49)
4 | 0.01 (0.03) | 0.20 (0.19) | 0.32 (0.20) | 0.32 (0.19) | 0.23 (0.21) | 0.38 (0.20) | 0.74 (0.60)
5 | 0.15 (0.04) | 0.18 (0.18) | 0.40 (0.26) | 0.47 (0.23) | 0.31 (0.18) | 0.52 (0.15) | 0.66 (0.35)
Note. Count was the optimal condition for Teachers 1, 2, and 5; tally was the optimal condition for Teachers 3 and 4. Teacher 1 was moved into maintenance, and Teachers 3, 4, and 5 received performance feedback during follow-up phases.

Figure 2. Average ratings of acceptability on the Intervention Rating Profile–15. Note. High scores are desired on Items 1, 2, 3, and 5; low scores are desired on Item 4.

Because of
high variability, Teacher 2 was not moved into a follow-up
phase. However, her average praise rate was higher during
the optimal treatment phase than any of the conditions dur-
ing the previous alternating treatments phase.
When performance feedback was introduced for Teachers
3–5, findings were inconsistent. Teacher 3 did not appear to
respond as her specific praise rate initially decreased and
then returned to previous levels. She commented that it was
difficult to tally on various days during this phase. Teacher 4
did not appear to respond initially, but her use of specific
praise increased toward the end of the phase. It is interesting
that this increase in praise corresponded to a decrease in
adherence to self-monitoring; therefore, another variable
(e.g., end of school year pressure to widely distribute school-
wide behavior coupons paired with praise) may better
explain changes in her specific praise rate. Teacher 5 gradu-
ally increased her specific praise during the performance
feedback, but all data points overlapped with those from the
previous phase. In sum, most teachers engaged in their opti-
mal specific praise rates during the optimal treatment phase,
and performance feedback did not result in substantial gains
over self-monitoring.
Therefore, self-monitoring appears to be an effective
tool to increase teachers’ use of specific praise. This finding
adds to existing literature demonstrating that self-monitoring
(Allinder et al., 2000; Browder et al., 1986) and self-evaluation
(Keller et al., 2005; Sutherland & Wehby, 2001) are associated
with increases in teachers’ use of evidence-based practices
(i.e., data-based decision making and specific praise,
respectively). Results of this study also suggest that teach-
ers’ use of simple and efficient self-monitoring strategies
employed while teaching may be effective and may reduce
the need for more time-intensive performance feedback and
self-management procedures, such as reviewing and evalu-
ating audio recordings of instruction.
Limitations
Study results should be viewed in light of the following
limitations related to the scope, context, measurement, and
design of this study. First, we explored the effects of vari-
ous self-monitoring strategies on five middle school teach-
ers’ use of specific praise. As these teachers volunteered to
participate in a study on classroom management, they may
have responded differently to self-monitoring and increas-
ing praise than other teachers. Generalizations of findings
to other populations of teachers and to other classroom
management practices (e.g., prompts) are premature. The
effects of self-monitoring on other populations of teachers
and other classroom management practices should be sys-
tematically studied.
Second, although we scheduled direct observations during
the times each teacher identified as teacher-directed instruc-
tion, variability existed among instructional conditions within
and across teachers. In addition to providing direct instruction,
teachers worked with individual students, facilitated indepen-
dent seatwork, and delivered other types of instruction. Also,
four of the classrooms were typical general education class-
rooms, and one classroom (Teacher 3) was a small-group spe-
cial education setting. This variability in instructional practices
and class composition may have influenced teachers’ specific
praise rates.
Third, observers may not have captured all instances of
specific praise delivered by teachers. Because instructional
conditions varied, observers may have had difficulty hear-
ing and recording teachers’ specific praise statements dur-
ing certain instructional conditions. For example, if a
teacher walked around the room and quietly provided feed-
back to students, observers may not have heard all
instances of praise. Similarly, Teacher 3 delivered instruc-
tion in both English and Spanish. Although her style was to
repeat information in both languages, observers may have
missed some instances of praise in Spanish.
Finally, although the study design allowed direct com-
parison of self-monitoring strategies and no self-monitoring
during the alternating treatments phase, the introduction of
the optimal treatment and follow-up phases was not stag-
gered; therefore, a functional relationship between the opti-
mal strategy or follow-up (e.g., performance feedback) and
teacher behavior was not documented. In addition, research-
ers were directly involved in providing brief trainings
between phases and delivering feedback during the perfor-
mance feedback phase.
Implications
Although preliminary, the results of this study suggest that
simple and efficient self-monitoring strategies may be
related to increases in teachers’ use of specific praise. In
particular, recording each instance of specific praise using
either a counter or tally resulted in optimal praise rates for
all teachers, and all teachers preferred using the counter.
Therefore, school administrators and others involved in
supporting teachers may consider asking teachers to use a
simple self-monitoring strategy to record their use of spe-
cific practices, like praise, to increase their implementation
of that practice.
In addition, this study clearly highlights a need for addi-
tional research in the use of simple strategies to increase
teachers’ use of evidence-based classroom management
practices. First, researchers should use experimental designs
(e.g., multiple baseline, withdrawal, group experimental) to
continue to study the effects of self-monitoring on teachers’
use of classroom management skills like specific praise.
Second, if self-monitoring is functionally related to
increases in specific classroom management skills, research-
ers should explore the conditions under which self-monitoring
may be used. For example, it would be useful to examine (a)
how many behaviors teachers can effectively and effi-
ciently monitor at one time; (b) whether self-monitoring
effectiveness is similar under other instructional contexts,
such as transitions, student-led activities, and teacher lec-
ture; (c) what “dose,” or length and intensity, of self-
monitoring is required to sustain the desired level of
teacher behavior; and (d) how to fade self-monitoring
while maintaining desired levels of teacher behavior.
Third, given the variability in teacher characteristics with
respect to years of experience, prior training, skill fluency, and
other characteristics, we would expect general strategies, like
self-monitoring, to be effective with some, but not all, teach-
ers. Therefore, future researchers should examine what addi-
tional supports might be needed if simple self-monitoring is
ineffective. Finally, researchers should explore the effective-
ness of self-monitoring under typical school conditions to
establish the ecological validity of this practice (e.g., Carr
et al., 2002). In this study, researchers provided training, feed-
back, and prompting. An important question is whether simi-
lar implementation fidelity can be achieved when support is
provided by school administrators, school psychologists, and
peer mentors under typical work conditions.
In sum, if teachers are to benefit from the use of effective
practices, they must be able to implement those practices with
fidelity. Performance feedback can enhance implementation
fidelity; however, obtaining useful and meaningful feedback
may be difficult when resources are limited. The findings
from this study suggest that self-monitoring may be a strategy
for teachers to obtain information about their implementa-
tion in a relevant, efficient, and effective manner.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with
respect to the research, authorship, and/or publication of this
article.
Funding
The author(s) disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article: The
development of this article was supported in part by Grant
H029D40055 from the Office of Special Education Programs
(OSEP), U.S. Department of Education for the OSEP Center on
Positive Behavioral Interventions and Supports (www.pbis.org).
Opinions expressed herein are the authors’ and do not necessarily
reflect the position of the U.S. Department of Education, and such
endorsements should not be inferred.
References
Abbott, R. D., O’Donnell, J., Hawkins, J. D., Hill, K. G., Kosterman, R., & Catalano, R. F. (1998). Changing teaching practices to promote achievement and bonding to school. American Journal of Orthopsychiatry, 68, 542–552.
Allen, C. T., & Forman, S. G. (1984). Efficacy of methods of training teachers in behavior modification. School Psychology Review, 13, 26–32.
Allinder, R., Bolling, R., Oats, R., & Gagnon, W. (2000). Effects of teacher self-monitoring on implementation of curriculum-based measurement and mathematics computation achievement of students with disabilities. Remedial and Special Education, 21, 219–226.
Barlow, D. H., & Hayes, S. C. (1979). Alternating treatments design: One strategy for comparing the effects of two treatments in a single subject. Journal of Applied Behavior Analysis, 12, 199–210.
Begeny, J. C., & Martens, B. K. (2006). Assessing pre-service teachers’ training in empirically-validated behavioral instruction practices. School Psychology Quarterly, 21, 262–285.
Brophy, J. (1981). Teacher praise: A functional analysis. Review of Educational Research, 51, 5–32.
Browder, D., Liberty, K., Heller, M., & D’Huyvetters, K. K. (1986). Self-management by teachers: Improving instructional decision making. Professional School Psychology, 1, 165–175.
Caplin, D., & Creer, T. (2001). A self-management program for adult asthma, Part III: Maintenance and relapse of skills. Journal of Asthma, 38, 343–356.
Carr, E. G., Dunlap, G., Horner, R. H., Koegel, R. L., Turnbull, A. P., Sailor, W., . . . Fox, L. (2002). Positive behavior support: Evolution of an applied science. Journal of Positive Behavior Interventions, 4, 4–16, 20.
Chalk, K., & Bizo, L. A. (2004). Specific praise improves on-task behavior and numeracy enjoyment: A study of year four pupils engaged in numeracy hour. Educational Psychology in Practice, 20, 335–351.
Cooper, J., Heron, T., & Heward, W. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Pearson.
Creer, T., Caplin, D., & Holroyd, K. (2005). A self-management program for adult asthma, Part IV: Analysis of context and patient behaviors. Journal of Asthma, 42, 455–462.
Creti, L., Libman, E., Bailes, S., & Fichten, C. (2005). Effectiveness of cognitive-behavioral insomnia treatment in a community sample of older individuals: More questions than conclusions. Journal of Clinical Psychology in Medical Settings, 12, 153–164.
Donaldson, J., & Normand, M. (2009). Using goal setting, self-monitoring, and feedback to increase calorie expenditure in obese adults. Behavioral Interventions, 24, 73–83.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication #231). Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network.
Gast, D. L. (Ed.). (2010). Single subject research methodology in behavioral sciences. New York, NY: Routledge.
Jeffrey, J. L., McCurdy, B. L., Ewing, S., & Polis, D. (2009). Classwide PBIS for students with EBD: Initial evaluation of an integrity tool. Education and Treatment of Children, 32, 537–550.
Keller, C. L., Brady, M. P., & Taylor, R. L. (2005). Using self-evaluation to improve student teacher interns’ use of specific praise. Education and Training in Developmental Disabilities, 40, 368–376.
Martens, B. K., Witt, J. C., Elliott, S. M., & Darveaux, D. X. (1985). Teacher judgments concerning the acceptability of school-based interventions. Professional Psychology: Research and Practice, 16, 191–198.
Ngamvitroj, A., & Kang, D. (2007). Effects of self-efficacy, social support and knowledge on adherence to PEFR self-monitoring among adults with asthma: A prospective repeated measures study. International Journal of Nursing Studies, 44, 882–892.
Noell, G. H., Witt, J. C., Gilbertson, D. N., Rainer, S. D., & Freeland, J. T. (1997). Increasing teacher intervention implementation in general education settings through consultation and performance feedback. School Psychology Quarterly, 12, 77–88.
Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Gatti, S. L., Williams, K. L., . . . Duhon, G. J. (2005). Treatment implementation following behavioral consultation in schools: A comparison of three follow-up strategies. School Psychology Review, 34, 87–106.
Reynolds, L. K., & Kelley, M. L. (1997). The efficacy of a response cost-based treatment package for managing aggressive behavior in preschoolers. Behavior Modification, 21, 216–230.
Rokke, P., Tomhave, J., & Jocic, Z. (2000). Self-management therapy and educational group therapy for depressed elders. Cognitive Therapy and Research, 24, 99–119.
Simonsen, B., Fairbanks, S., Briesch, A., Myers, D., & Sugai, G. (2008). A review of evidence-based practices in classroom management: Considerations for research to practice. Education and Treatment of Children, 31, 351–380.
Simonsen, B., Myers, D., & DeLuca, C. (2010). Providing teachers with training and performance feedback to increase use of three classroom management skills: Prompts, opportunities to respond, and reinforcement. Teacher Education in Special Education, 33, 300–318. doi:10.1177/0888406409359905
Skinner, B. F. (1953). Science and human behavior. New York, NY: Macmillan.
Sutherland, K. S., & Wehby, J. H. (2001). The effect of self-evaluation on teaching behavior in classrooms for students with emotional and behavioral disorders. Journal of Special Education, 35, 161–171.
Sutherland, K. S., Wehby, J. H., & Copeland, S. R. (2000). Effects of varying rates of behavior-specific praise on the on-task behavior of students with EBD. Journal of Emotional and Behavioral Disorders, 8, 2–8.
Workman, E. A., Watson, P. J., & Helton, G. B. (1982). Teachers’ self-monitoring of praise vs. praise instructions: Effects on teachers’ and students’ behavior. Psychological Reports, 50, 559–565.
by Anqi Zheng
Submission date: 13-Mar-2019 12:38PM (UTC+1100)
Submission ID: 1092365546
File name: 2328228_Anqi_Zheng_EBP_2110903_1884335781.docx
Word count: 3650
Character count: 20395
FINAL GRADE
7/20
EBP
GRADEMARK REPORT
GENERAL COMMENTS
Instructor
Erica,

Your assignment has been awarded the mark of 7/20, which translates to a Fail.

Your Evidence-Based Practice Assessment has been marked using the criteria set out in the course outline. Please read and take note of the specific comments included within the body of your work, and the following, more general, comments based on the assignment criteria.

An understanding of the task and its relationship to the theory, research and practice of classroom management was not well demonstrated.

The articles you chose for the assessment were not appropriate, as they were not articles describing research studies that support the use of the evidence-based practices on the list provided. This made it nearly impossible for you to address the prompts in the matrix correctly and effectively.

The submission was above the 3,000-word length requirement by 10%.

Your use of key terms and concepts lacked clarity and accuracy at times.

Your understanding of key classroom management principles and issues was not clearly evident.

You substantiated the use of evidence-based practices in part two of the assessment, but failed to provide an explanation of how teachers could determine if a practice had an evidence base to support its use. You also re-summarised some of the material from part 1, which was inappropriate.

Your response was supported by a range of peer-reviewed literature.

Your assignment was adequately structured, but the product lacked both clarity and coherence. This was caused by errors in sentence structure, vocabulary, spelling and punctuation.

As per the School of Education assessment policy, you may resubmit this assessment for a mark no greater than 10/20. You will submit this resubmission to the resubmission box on the course's Moodle site. You have two weeks to submit, so you may want to make an appointment to see me to go over your assessment.
PAGE 1
PAGE 2
Comment 1
This article describes more than one EBP, so your answers to the prompts for this EBP lack clarity and coherence.
PAGE 3
Comment 2
This is not an appropriate article for this assessment as it does not describe a study used to provide evidence to support the use of the EBP.
PAGE 4
Comment 3
to allow the student to do what?
PAGE 5
Awk.
Awkward: The expression or construction is cumbersome or difficult to read. Consider rewriting.
PAGE 6
PAGE 7
Comment 4
QM
Intervention in School and Clinic is a practitioner-focused journal. It seeks to span the research-to-practice gap by providing implementation advice to teachers. It does not disseminate research studies, so is not appropriate for this assessment.
PAGE 8
PAGE 9
Comment 5
Again, this article is not appropriate for this assessment.
PAGE 10
PAGE 11
PAGE 12
PAGE 13
Comment 6
This whole paragraph doesn't address the prompt.
PAGE 14
Awk.
Awkward: The expression or construction is cumbersome or difficult to read. Consider rewriting.
PAGE 15
PAGE 16
PAGE 17