Does No One Else Read YA Dystopian Fiction?

Broadband Matters, an awareness initiative of the Iowa Communications Network, tweeted a link to Connecting the Classroom with the Internet of Things earlier today.

While the tweet mentions a fairly innocuous sounding method of taking attendance (though see RFID chip tracking of students), the smart band described in the article “uses ECG patterns to authenticate identity.” There is also a suggestion that teachers could use EEG technology to measure student brain activity to decide which students need more attention. Or teachers could use technology to “map[] the record of behavioral incidents against a student’s heart rate.” Because apparently adults cannot be expected to effectively teach children without the benefit of monitoring and analyzing their heart rates and brain activity, at least not in the 21st century.

ATF: State Board Member Miller Weighs In

State Board of Education member Mary Ellen Miller, who participated in the task force presentations to the Senate and House Education Committees earlier this month, has a guest column in The Gazette advocating the adoption of the Smarter Balanced assessments. The guest column is a response to the Gazette staff editorial SMARTER BALANCED: Recommended assessment for Iowa’s K-12 students carries a hefty price tag, and for what?

Miller describes the staff editorial as second-guessing the task force recommendations, as if that were a bad or impermissible thing for the Gazette staff to do. The task force undertook our work at the request of the Iowa Legislature for the express purpose of making policy recommendations about statewide assessments. Anyone interested in our work, including staff at The Gazette (not that they need my permission or approval), can and should poke, prod, examine, question, and even second-guess our recommendations and process.

Miller also describes the staff editorial as misinformed. That seems like a pretty strong word for what appeared to me to be a pretty well-informed take on the assessment issue. Might I suggest “written from a different perspective than mine” or “written with different priorities in mind than mine”?

Miller describes the task force as having “spent more than a year studying options for an assessment system.” I think it would be more accurate to describe the task force as having studied summative (end of year) assessment options, one of which also included interim assessments and a digital library.

Miller describes the Smarter Balanced assessments as more than an annual test but “a system of quick, informal tests–some lasting only a few minutes”, an “approach to assessment [that] doesn’t take time away from instruction, as The Gazette suggests.” I can only guess that Miller is referring to the formative assessments that are part of the Smarter Balanced digital library or the interim assessment blocks. However, the summative Smarter Balanced assessments and the full-length interim assessments do take more time to administer than either the current Iowa Assessments or the proposed Next Generation Iowa Assessments, and they do not include the required science assessments. It’s hard to see how these lengthier assessments (plus the additional, required science assessments) don’t take more time away from instruction unless 1) the Legislature adds time to the school year or 2) the kids take them during lunch or recess time.

Miller then states “[s]upporters of the status quo will use misleading cost estimates or technology concerns to argue against the Smarter Balanced assessments.”

I addressed the difficulty of crafting a fair, apples-to-apples comparison of costs for the Smarter Balanced assessments and the Next Generation Iowa Assessments in the previous blog post. I think it is more helpful to be upfront about the assumptions used to create the cost estimates than to accuse others of being misleading. In any case, the task force was charged with recommending a summative assessment,* so I don’t think it is inherently misleading to focus on the summative assessment cost estimates only and to exclude other assessment spending by districts. It is up to legislators to decide whether to choose an assessment or an assessment system.

As for the technology concerns, the most recent task force documents show a bandwidth survey of schools has been done but not a computer hardware survey. Readers may be interested in the UEN technology directors memo to the task force on statewide assessment technology costs and support.

*See the text of 256.7(21)(b)(2) and (3).

ATF: Apples to Apples

On Wednesday, members of the Assessment Task Force appeared in front of the House Education Committee for a question-and-answer session following up on the task force presentation the week before. Unfortunately, I was unable to attend.

However, in preparation for this appearance, the task force created an additional document comparing assessment options and I do have a copy of this to share. Find the first page (overview) here and the remaining pages (comparison table) here.

I want to focus my comments here on the portion of the table comparing the costs of the first year of an assessment system based on either the NGIA or the Smarter Balanced assessments as the summative (end of year/accountability) assessment.

This table illustrates that the Smarter Balanced-based system of assessments is less expensive than the NGIA-based assessment system, even though the Smarter Balanced assessments have a higher per student cost than the NGIA.

I think the following unstated assumptions are baked into those numbers:

  1. Iowa’s vendor will match Connecticut’s vendor’s pricing for vendor services.
  2. Technology costs are zero (either no district or state entity will require technology upgrades to administer statewide online assessments or it is a district decision to upgrade or purchase additional technology not properly attributable to the decision to choose an assessment requiring online administration).
  3. Costs of science assessments (summative/accountability plus multiple measure) are zero.
  4. Districts will either stop administering all of the other multiple measures assessments they have selected for district use, or, if they don’t, the costs of those assessments shouldn’t be charged to the “assessment system.”

I think this table hints at the difficulty we have had creating a completely fair, apples-to-apples cost comparison of the two assessments. One requires technology, one doesn’t. One has an additional cost for a paper and pencil option (but perhaps not many schools will use it?), one doesn’t. One has a science assessment, one doesn’t. One has a suite of assessment options, one doesn’t. The predecessor to one has additional reporting costs, paid by the state and not included in the per student cost; one doesn’t.
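To make those baked-in assumptions concrete, here is a minimal sketch of the comparison in Python. The per student figures are my own stand-ins, loosely inferred from numbers that appear elsewhere in these posts ($15 for the NGIA summative, $27.30 implied for the full Smarter Balanced suite); they are not task force figures.

    # Rough sketch of the apples-to-apples problem (hypothetical figures).
    NGIA_SUMMATIVE = 15.00   # per student, summative only
    SBAC_SUITE = 27.30       # per student, summative + interim + digital library

    technology = 0.00        # assumption 2: no upgrades attributed to the choice
    science = 0.00           # assumption 3: two science assessments, uncosted
    district_tests = 15.00   # e.g., MAP math/reading plus science

    # Assumption 4: districts drop their other assessments under SBAC,
    # so that cost attaches only to the NGIA column.
    sbac_system = SBAC_SUITE + technology + science   # 27.30
    ngia_system = NGIA_SUMMATIVE + district_tests     # 30.00
    print(sbac_system < ngia_system)                  # True: SBAC looks cheaper

    # Keep the district assessments in both columns and the ordering flips:
    print(SBAC_SUITE + district_tests < ngia_system)  # 42.30 < 30.00: False

Change any one of the zeroed-out line items and the bottom line moves, which is the point: the comparison is only as fair as its assumptions.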

Should we be comparing costs of summative assessments only or assessment systems and why? Which costs should be counted and why? Should technology costs have been included or is it just something districts should be doing anyway?

ATF: Legislative Questions

I blogged previously about questions legislators asked during the Assessment Task Force presentations to the Senate and House education committees. Prior to those presentations, legislators submitted twenty-six written questions about the Assessment Task Force recommendations. The questions–along with the task force answers–are available here.

Under question one (about technology needs) the task force response notes:

Technical requirements for the Measures of Academic Progress (MAP) testing are similar to those for the Smarter Balanced assessments. Many schools in Iowa [have] years of experience using this platform to assess students (over 60% of districts sampled on a recent survey on assessment practices indicated they used the MAP assessments). We can learn from them.

The task force references MAP testing for a few different purposes in this document (technology, cost comparison). I should note that the task force as a whole group never reviewed the MAP tests–or other assessments used for multiple measures purposes–or compared them to SBAC interim assessments, though undoubtedly many of the other task force members are familiar with these assessments through their work as educators.

Part of the technical and logistical challenge of the Smarter Balanced assessments is the length of the assessments–needing to schedule all grade three through eleven students for 7 to 8.5 hours of testing within the fixed testing window. The MAP assessments are much shorter, running 20-30 minutes for a survey test or 60-75 minutes for a goals survey test (per subject area).

For question two (in-school bandwidth issues), the task force suggests a paper/pencil option or busing students to another location as options for schools with insufficient bandwidth. It should be noted that the paper/pencil format will cost Washington schools an additional six dollars per subject test, or twelve dollars total per student. As for busing, it adds to the loss of instructional time and would have to be done for multiple testing sessions, as the Smarter Balanced assessments are much too long for students to take in one sitting.

Under question six (about possible savings if the full suite of SBAC assessments is adopted), the task force provides costs for MAP tests. Districts can use MAP tests to meet the multiple measures requirements for math and reading ($12.50 per student) plus science (an additional $2.50 per student). That means a district could meet the assessment requirements for $30 per student (NGIA at $15 plus MAP with science at $15). To match that cost with SBAC, school districts would need to obtain two science assessments for a total of $2.70 per student, which frankly seems unlikely.

Under question nine (about measuring growth for students at very low and very high levels of performance), the task force notes that while the NGIA will reportedly include grade-level items only:

The Smarter Balanced Assessments utilize an adaptive testing model that measures on a broad continuum of skills. The scaling allows interpretation of growth against that broad continuum. Thus, a student performing at a very low level will receive test items measuring their proximal level of development. As the student improves over time, more difficult items will be delivered, with all results on the same scale, allowing interpretation of student growth. A student performing well above grade level will have a similar experience, but with items more in line with his/her advanced skills.

This task force understanding of the promise of computer-adaptive testing for the lowest and highest performing students–which I shared–seems to be somewhat at odds with this explanation provided in a new white paper commissioned by SBAC, “Making Good Use of New Assessments: Interpreting and Using Scores From the Smarter Balanced Assessments Consortium”, dated March 2015:

Although Smarter Balanced uses computer adaptive testing, and although test scores will be arrayed on a vertical scale, the tests will not measure the full continuum of achievement along which students actually fall. As noted above, Federal rules require that students be tested with items associated with grade level content and skills and most students will encounter only those items. Based on their performance on the bulk of the test, the CAT will direct some students to items just above or below grade level. However, the test will not measure the actual skill levels of students whose achievement is far above or far below their grade levels, since they will not encounter many items that accurately measure their actual levels of achievement. Thus, while the test will accurately describe the student’s knowledge of grade level and near–grade level content, the test will not be as sensitive a measure of growth for these students.

As an example, an 8th grade math teacher whose students are functioning with 4th grade skills will have to teach requisite computational skills while also seeking to teach algebraic concepts. Given the design parameters associated with Smarter Balanced assessments, gains that students make in these foundational computational skills will not be specifically measured on the tests, because students will not encounter items that would evaluate their growth of these specific skills. The growth metric that could be produced from Smarter Balanced tests will only provide information about the extent to which students’ scores changed within the region of their grade level, not a full measure of their starting and ending skills at two points in time.

In other words, the Smarter Balanced assessments may not be as different from the Iowa Assessments with regard to above and below grade level items as we–or at least I–thought they might be.
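For readers who want to see the mechanics, here is a toy sketch, entirely my own illustration and not SBAC’s actual item-selection algorithm, of why a grade-level-constrained adaptive test can’t measure skills far above or below grade level. The one-grade band width is an assumption for illustration only.

    # Toy model: adaptive item selection clamped to a grade-level band.
    def next_item_difficulty(ability_estimate, grade, band=1.0):
        """Adaptive in principle, but items stay near the enrolled grade."""
        lo, hi = grade - band, grade + band
        return max(lo, min(hi, ability_estimate))

    # An 8th grader working at a 4th grade level never sees 4th grade items:
    print(next_item_difficulty(ability_estimate=4.0, grade=8.0))   # 7.0, not 4.0
    # A student far above grade level is clamped the same way:
    print(next_item_difficulty(ability_estimate=12.0, grade=8.0))  # 9.0, not 12.0

The adaptivity operates only within the band, which is why the white paper says the test “will not be as sensitive a measure of growth” for these students.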

Under question ten, the task force declines to choose between adequate funding for foundation support or the assessments. As the sole dissenter, I will speak for myself and say that this isn’t even a close question for me. Adequate funding for instructional programming is a higher priority than funding assessments that are more expensive and time consuming than necessary to meet assessment requirements.

Under question fourteen, the task force suggests that the additional hours spent on SBAC assessments would have an insignificant impact on instructional time. I’ll note just a few things here: 1) no one is talking about adding 1.5 minutes of testing per day, 2) either the full-length interim assessment or the practice test is likely to be given so that students can become familiar with the test format and software plus the performance task items–so you can at least double that extra four hours, and 3) students would still need to take two additional science assessments of unknown duration.
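For what it’s worth, here is the arithmetic I suspect lies behind the “insignificant impact” framing, reconstructed by me with assumed figures (the 180-day year and the 20-day testing window are my assumptions, not the task force’s):

    extra_hours = 4.0                # additional SBAC time vs. current tests
    print(extra_hours * 60 / 180)    # ~1.3 minutes/day, spread over a school year
    # But testing happens inside a fixed window, and a practice or full-length
    # interim test plausibly doubles the extra time:
    print(2 * extra_hours * 60 / 20) # 24 minutes/day over a 20-day window

Spread over a whole year, four extra hours sounds trivial; concentrated in a testing window and doubled, it doesn’t.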

Under question twenty-six, the task force does not provide information on the experience other states have had with the roll-out of new exams. If you are a regular reader, you will know that states already using statewide online assessments are having an easier time rolling out the new assessments than states that aren’t, that states and districts have been spending a lot of money on technology to prepare for online assessments, and that even so they are encountering a variety of technology issues large and small.

This isn’t necessarily an argument against adopting a statewide online assessment, but it is an argument that a move to statewide online assessments should be made with our eyes open, with a comprehensive statewide technology plan aimed at ensuring all schools can build and maintain adequate technology infrastructure for equitable testing experiences for all students, and with adequate funding to pay for it all without instructional cuts.

Smarter Balanced: I’m Ready

The Senate Education Committee and the House Education Committee were (mostly) all about assessments this afternoon.

Luci Willits, Deputy Executive Director of SBAC, and Catherine Welch, Director of the ITP’s statewide testing programs, were on the agenda for the Senate Education Committee; Assessment Task Force members Jo Ellen Latham, Mark Lane, and Jon McKenzie were on the agenda for the House Education Committee.

The second funnel deadline is one week from Friday (bills must be passed in one chamber and out of committee in the other chamber), so if either or both chambers intend to take action on assessments during this session, we should be seeing amendments filed and HF 446 and/or SF 429* on tentative debate calendars very soon. Representative Forristall asked last week about the effect of delaying the choice of a new statewide assessment, so there is some possibility that action will be delayed until the 2016 session.

While we wait, here’s a video shared by @smarterbalanced this afternoon:

*HT: Shane Vander Hart at Iowa RestorEd, who noticed what I managed not to notice in all of the times I have looked for amendments or other action on SF 429: the language from SSB 1239 adopting the Smarter Balanced assessments was removed from this bill, which would now simply strike the new assessment requirements currently set to take effect for the 2016-2017 school year and the language creating the Assessment Task Force.

ATF: Education Committee Presentations

The big news Wednesday was the announcement that DE Director Brad Buck will be the next superintendent of the Cedar Rapids Community School District.

Much less newsworthy on Wednesday, apparently, were the Assessment Task Force report presentations to both the Senate Education Committee and the House Education Committee (in separate meetings) by State Board of Education member Mary Ellen Miller and task force members Tammy Wawro (president of ISEA) and Jane Lindaman (superintendent of Waterloo Community School District).

Ten of twenty-one task force members attended the presentations, as did three DE employees who supported the work of the task force and a woman, unknown to me and never introduced or identified, who was obviously coaching the presenters. The words of the day were “research-based”, used in reference to the task force process, and “piecemeal”, used to describe the assessment system in use by Iowa’s schools. The basic narrative of the presentation is summed up in the final slide:

[Slide 19]

The presenters had four major issues to address: whether the task force followed the legislative charge, cost, technology readiness, and the time needed to administer the Smarter Balanced assessments. Legislators asked questions about all four.

The legislative charge issue was dealt with by pointing to the criteria of the screening rubric plus showing how the first two recommendations cover all three subject areas required to be tested.

The cost issue was dealt with in a couple of ways. The first argument was that costs will be equal to what we spend now if we buy the full Smarter Balanced suite of assessments and districts stop using the other assessments they have chosen to meet multiple measures requirements or for other district purposes (compare the SBAC assessment system to the piecemeal assessment (non)system to compare apples-to-apples costs). [Note: this argument skirts the need for a science assessment for accountability purposes plus a second science assessment for multiple measures purposes (remember, SBAC does not offer science assessments) and the cost of the technology required to administer the Smarter Balanced assessments.] The second argument was that we are wasting money on assessments that provide NO useful information at all, so it is better to spend it on the Smarter Balanced assessments instead, which will provide great information for parents and for improving instruction. [More on this issue later in a separate post; for now I’ll note that Miller left the distinct impression that she is unaware that the Iowa Assessments are administered to meet the assessment reporting requirements of NCLB and that other assessments are administered to meet Chapter 12 multiple measures requirements, an impression that I hope is incorrect.]

The technology issue was dealt with by offering assurances that ninety-nine percent of districts have enough bandwidth.

The time issue was dealt with by explaining that we get NO useful information now, so the additional time is worth it to get actually useful information for parents and for improving instruction.

Next up: some of my notes on the question and answer portions of the presentations. Most questions and answers are paraphrased, as the conversation was moving too fast for me to record exact quotes. Both meetings ended without all legislators being able to ask their questions due to scheduling constraints.

Sen. Quirmbach (D-Story) asked about concerns he’s heard about security of test data (I think, particularly concerns that identifiable student data will be shared with the feds). Wawro: outside the scope of our charge. He also asked whether it was a close contest or a clear winner. Lindaman: clear winner.

Sen. Johnson (R-Osceola) commented that there might have been better geographic balance on the task force (Western Iowa is different) and noted that costs weren’t clearly covered in the report. Wawro: not our job to look at costs, just provide information; she also argued that costs are only for deciding between two otherwise equal options, which was not the case here. He also commented that he is hearing from classroom teachers that the information provided by Smarter Balanced assessments isn’t helpful.

Sen. Bowman (D-Jackson) questioned the quality of the Smarter Balanced assessments. He had two legislative pages take the Smarter Balanced assessments [this elicited laughter] and reported that both had technology issues and that some of the questions were confusing. He also cited Steven Rasmussen’s critique of Smarter Balanced mathematics items (car question) and said that as a classroom teacher “I’ve seen our technology really let us down.” Lindaman: the point of a pilot is to find problems and improve; there have been problems with bad questions on the Iowa Assessments/ITBS for years.

Sen. Hart (D-Clinton) observed that not every child does well on standardized tests and that while kids might respond better to a computer-based test, it comes with challenges: time and costs. Lindaman: it could replace other tests. Hart said time is one of the biggest problems: we are spending too much time testing, losing instructional time, and may be causing test anxiety. Lindaman: too much testing happens because districts are trying to piecemeal together an assessment system.

Sen. Dvorsky (D-Johnson) led with an observation that “we sell things around here by saying we put a lot of work into this even if it’s wrong.” Just because the task force worked hard doesn’t mean the recommendations will be followed. ICCSD reported that the Smarter Balanced assessments take too much computer time and are not getting good results. The actual numbers on cost in the report are not good. [Missing some comments in my notes here.] Wawro: we looked at computer-adaptive assessments. “Did we ask you to do that?” Wawro: it describes student achievement [one of the legislative requirements]. Was ITP asked to provide a self-leveling test, or is that just the task force interpretation? Wawro: we understand this is a recommendation and not the decision; the legislature makes the decision. Lindaman: reflects a philosophical difference between the vendors. “You didn’t really answer my question. You’re saying all other testing costs will go away?” Wawro: that’s up to individual districts. “Costs being equal then goes out the window.”

Sen. Zaun (R-Polk) asked about the point of all students getting different questions and how students could be expected to pay attention for 8.5 hours. He also asked whether all schools have sufficient technology to administer the test and where the $22.50 cost came from.

Sen. Kraayenbrink (R-Webster) commented that perhaps the task force was charged with the wrong task and that Iowa should be developing its own assessments.

Rep. Salmon (R-Black Hawk) asked about costs. Lindaman: look at costs for all assessments given throughout the school year, not just summative. Lindaman: we want a system of assessments to predict performance on the summative assessments because “we want our scores to be better, we want our scores to improve.” [Note: I doubt Lindaman meant it quite like it sounds, but it struck me as sad that the focus in this formulation is on higher test scores rather than more learning or more curious and independent students or anything else you think we might want to improve about the school experience for kids.]

Rep. Winckler (D-Scott) mentioned a huge concern about the demand for technology, which hasn’t been thoroughly answered. She was not impressed with the professional development, especially the self-paced modules. She asked what information the teachers receive. She stated that ongoing assessment is a local control issue; she’s just interested in the one-time standardized assessment.

Rep. Mascher (D-Johnson) asked where the idea that SBAC is all we’d need is coming from. What about DIBELS (and other assessments I didn’t catch)? We don’t want just one; we want multiple assessments throughout the year. Lindaman: other districts may continue to give other assessments. Wawro: districts will still have to do FAST and some other assessments. Mascher mentioned horror stories about the SBAC pilot, including computer problems, and predicted they will occur again. She asked if we have the funding to do it adequately. She also noted that the amount of time eats into student learning; the important thing is the student learning, not the assessing. She also noted the amount of time computers would be tied up in school.

Rep. Jorgensen (R-Woodbury) asked them to explain the time differences between the tests. [Note: this sounded to me like it was meant to be a softball question.] Lindaman: agrees with Rep. Mascher, but learning and assessing go hand in hand and can’t be separated. Lindaman: the times are averages; it is an untimed test administered in chunks (not all in one sitting). He asked them to talk about the performance tasks. Lindaman: they show what students know. Lindaman: we heard technology concerns but we didn’t hear horror stories, and remember that pilots are about finding out what works and what doesn’t. Rep. Mascher: those were my words, it was a horror story.

Rep. Steckman: explain the timing and how it would work for a classroom. [Not clearly answered.] Lindaman: testing for testing’s sake is a mistake. She asked again: how does it work to give untimed tests to, say, thirty kids in one classroom? [Again, not clearly answered.] Lindaman: the time is worth it. She then asked about the self-paced professional development modules. Colleen Anderson, DE staff, described the SBAC digital library in some detail (she is part of the group putting it together). [Note: no one mentioned the digital library already paid for and part of the Iowa Core website.]

Rep. Forristall (R-Pottawattamie) asked if it would be harmful to push off this decision a year. Miller: not if it gets to the right decision and the time is well spent, but it does leave less time for schools to get ready. Wawro: the delay is a problem because the current assessment is not aligned and is already being used to judge schools (Attendance Center Rankings website).