Thursday, February 28, 2019

PBL Help! My Students Don't Want to Do a PBL!

by Kate Wolfe Maxlow

You would think that students would be thrilled at the idea of hands-on learning, right? Well, sometimes, not so much. Why?

There are a few possible reasons.

Reason #1: It's not easy

Learning by doing projects is engaging. That means it requires students to actually, well, engage with the learning. Engagement requires thinking. It requires work. If students have been spoon-fed their entire academic careers, the idea of having to go above and beyond to think critically and creatively might seem arduous at first.

Potential Solutions: Let students know ahead of time that they're going to be doing a PBL. Mention it frequently, and explain what it is and what it will look like, so they get their angst out early and move past it by the time you do the actual project. Assure them that you'll be there to scaffold it. You can even do an activity where you have them individually write down all their worries or oppositional thoughts, submit them anonymously, and then compassionately address the various concerns.


Reason #2: They hate group work

A lot of our top students don't actually enjoy group work because they're used to getting top grades AND being expected to pull along their less academically successful peers. They interpret a project as, "Here is a giant project that you are going to have to do all by yourself, except it will take even longer because you're going to have to redo other people's work and manage them when they don't pull their weight."

Potential Solutions: One option is to ensure that grades are a combination of individual and group work grades. Some teachers allow students to "grade" one another's contributions (the teacher has final say over the individual grades that students receive). On the front end, assign roles thoughtfully so the work is distributed evenly, keeping in mind that some roles carry less weight than others (for instance, the timekeeper often doesn't need to do as much as, say, the facilitator).

On the back end, technology can also be a great help here; collaborative programs like Google Slides or Google Docs let you check the version history to see exactly when various students worked on the documents AND what each one contributed.


Reason #3: They don't care about the project topic

You excitedly tell students that they are going to create a PSA about why cyber bullying is wrong. They sigh and agree to do it but are obviously unenthusiastic.

Potential Solutions: This is where student Voice & Choice can be your best friend. One temptation, in order to keep a project manageable, is to simply tell students what they are going to do. This makes it easier to help them complete their project and to grade it...but it can also be demotivating if the students have no interest. One way to combat this is to ask more questions rather than giving the solutions. For instance: "What are the impacts of cyber bullying? How has it impacted you personally? What do you think we should do about it?" So that the project doesn't go completely out into left field, you can still give students a rubric with the content that you want them to address...but letting them have some voice and choice in how they complete it is one way to help draw them in.

Another big way to combat the apathy is authenticity. Authenticity means that the project is either directly connected to students' current lives OR has them working the way people in real careers work. The cyber bullying project above is a perfect example of a project that is authentic to students' lives. A project that is authentic to a career might be something like having students design a new community center or create a menu for a new restaurant.

Lastly, consider a high engagement entry event. For instance, when doing a project on combatting pollution, have students start by taking a field trip to a common area, such as a beach, and actually clean up the trash. Then tell them, "Okay, how are we going to keep this from happening again?"


Reason #4: Students don't understand how to do projects

Sometimes students are slow to warm to the idea of project-based learning simply because they don't know what is expected or how to even start. I remember when I was in high school physics and had to create a vessel that could keep a raw egg from breaking when dropped from the top of the bleachers. Quite frankly, I had no idea how to do this. I put it off and put it off because I didn't even know what step one should be (this was in the days before the internet, when you could just look up a plan to save your poor, defenseless egg). My egg contraption was made the weekend before it was due, and I'm sorry to say that the egg, which did nothing wrong, did not survive.

Potential Solutions: Break up the project for students. In the egg drop project, it would have been nice if the teacher had specifically chunked the PBL. Maybe we could have started simply with some whole-group brainstorming, then moved to small-group research. The teacher could have checked in with all of us to see what we were planning and provided feedback. Perhaps our initial designs would have been turned in, and we could have given other groups critical feedback. We would have done a test run and used the results to make improvements. Then, I might have actually learned something rather than simply wasting a perfectly good breakfast item.


In Short

What are the overall lessons? Teachers need to scaffold the PBL by providing structures for it and chunking it into manageable pieces. The teacher should check in frequently with students and make sure everyone is pulling his or her weight. The project needs to include students' voice and choice and be authentic to their lives or future careers. Lastly, the more that students do projects, the better they'll get at doing them, and the more excited they'll be about them, too.

Kate Wolfe Maxlow is the Director of Innovation and Professional Development for Hampton City Schools. You can follow her on Twitter @LearningKate, connect with her on LinkedIn, or email her at kmaxlow@hampton.k12.va.us.

Tuesday, February 26, 2019

Teacher, where do assessments come from? The Origins of an Assessment

by Kate Wolfe Maxlow

When an assessment is born, there are several different places it can come from, and there are pros and cons to each.

International Assessments


  • Examples: Trends in International Mathematics and Science Study (TIMSS), Programme for International Student Assessment (PISA), International Baccalaureate (IB)
  • Pros: These tests can let us know how students across the world compare. They are also highly standardized and generally provide good reliability (consistency) and valid inferences about student understanding.
  • Cons: They may not fully match the students' actual (often state-set) curriculum, so results can say more about the curriculum than about the particular instruction or student ability. The results are often very general (questions are not usually released), so there is a limited amount of information a specific teacher can gather from them.

National Assessments

  • Examples: National Assessment of Educational Progress (NAEP) for Reading and Mathematics; Measures of Academic Progress (MAP), SAT/ACT, competency exams
  • Pros: These tests compare students across the country. They provide statistical norms and can be measures of the curriculum as well as the instruction and student ability.
  • Cons: Because these tests require scoring a very large number of assessments, they tend to rely more on select-response style items. The results are often very general (questions are not usually released), so there is a limited amount of information a specific teacher can gather from them.

State Assessments

  • Examples: Standards of Learning Tests, Common Core Assessments
  • Pros: These tests are usually highly aligned to the state curriculum. Like national tests, they provide statistical norms and can be measures of the curriculum as well as the instruction and student ability. They are also usually created by organizations with dedicated assessment departments, and therefore often have a high degree of validity and reliability.
  • Cons: Similar to national assessments, there's a strong preference toward select-response style assessments. They usually occur toward the end of the school year, and are therefore more like an autopsy of what was learned than a diagnostic check-up.

District/Division Assessments

  • Examples: Benchmark assessments, critical skills assessments, district/division performance assessments
  • Pros: These can be specifically aligned to district curriculum and occur whenever the district sets them; therefore, they don't have to be given at the end of the year. There's also the option to have teachers grade their own students' work, which means that the assessments can be more open-ended (like performance assessments).
  • Cons: The district may or may not have the ability to run statistical analyses on the validity and reliability of the assessments, and those writing them may or may not have training in assessment writing. This can lead to lower reliability or less valid inferences about student knowledge. Unlike state assessments, which often come with specific directions on how to administer the assessment, validity may also be compromised by inconsistencies in how teachers implement the assessments (e.g., some teachers may allow their students to go back and check their work, whereas others do not).

Classroom Assessments

  • Examples: Teacher created tests, quizzes, or other classroom activities
  • Pros: These assessments can provide the most information to teachers. They also provide the most flexibility. They can be designed to target specific knowledge or skills.
  • Cons: Teachers have to create or design them themselves. The level of reliability (consistency of results) and validity (do the results actually help us make valid inferences about what students know and are able to do?) may be impacted if the teacher has never had specific training on how to write assessments.

In short...there's a time and a place for each type of assessment, and knowing the strengths and limitations of each can help us make better decisions about how to use them.

Kate Wolfe Maxlow is the Director of Innovation and Professional Development for Hampton City Schools. You can follow her on Twitter @LearningKate, connect with her on LinkedIn, or email her at kmaxlow@hampton.k12.va.us.

The Main Types of Assessment & How to Balance Them

by Kate Wolfe Maxlow

There are two main types of assessments: select response and constructed (or supply) response. Performance assessments are a special type of constructed response. The type of assessment that you choose will depend on your standards or competencies and what students are supposed to know, understand, and be able to do.

The goal in education should be to use a Balanced Assessment system...to use the appropriate type of assessment for the appropriate level of learning.

Select-Response

Select response items are common in today's standardized assessment world. The teacher provides the student with various possible answers, and the student selects the correct one(s). True/False, matching, and multiple-choice are examples of select response questions.

Constructed-Response

Unlike a select response question in which the teacher provides the answers and the students choose the correct one(s), in a constructed response, students must create their own responses. Examples of this can include fill in the blank, diagrams, short answers, essays, or performance assessments.

Performance Assessments

Performance assessments are a special kind of constructed response because they require students to actually USE what they have learned in a practical or authentic way. When we say "authentic," we mean that it can be authentic to students' current lives, to potential future careers, or to the discipline itself. When we have students create series/parallel circuits, take a patient's pulse, or make change using physical money, those are all examples of (albeit less complicated) performance assessments. Performance assessments can range in length, intensity, and instructor intervention.

How do we choose which type of assessment or assessment item to use?

There's no hard and fast rule for how we choose the type of assessment. Generally, select response items cover the Remembering through Applying levels (with some multiple-choice items able to reach the Analyzing level). Simple constructed response items (such as fill in the blank) can also be lower-level, but the beauty of constructed response is that it can go all the way up to the Creating level. Therefore, it's important to unpack standards using a taxonomy such as Bloom's cognitive domain or Webb's Depth of Knowledge in order to determine the appropriate rigor of each standard.

We also have to consider how long the various types of assessments take. Students can complete a single multiple-choice item, on average, in about 30-60 seconds. An essay, on the other hand, can take upwards of an hour; in a single 45-minute class period, that's the difference between roughly 45-90 multiple-choice items and one partially finished essay. Performance assessments, especially if they become complex projects, can take days or weeks.

Moreover, grading time for the various types of assessments is different. With a select response or a lower-level constructed-response item, a teacher doesn't have to spend a long time grading. With a longer constructed response, especially a performance assessment, the teacher may end up spending considerable time grading using a lengthy checklist or rubric.

For these reasons, it's usually best to use the following rules of thumb:
  • To check factual knowledge or simple application of skills (for instance, whether students can add/subtract with regrouping), use select response or lower-level constructed response.
  • To check in-depth understanding or ability to use skills in real-life contexts, use constructed response. Especially when it comes to using skills in real-life settings, performance assessments are preferred.
  • Strive for a balance between select response, constructed response, and performance assessments.

Kate Wolfe Maxlow is the Director of Innovation and Professional Development for Hampton City Schools. You can follow her on Twitter @LearningKate, connect with her on LinkedIn, or email her at kmaxlow@hampton.k12.va.us.

PBL Starter Guide: The Essential Question vs. the Driving Question

by Kate Wolfe Maxlow

The Essential Question and the Driving Question serve important, but very different, roles in Project-Based Learning.

Let's talk about Essential Questions. 

Essential Questions are big, open-ended questions that have no right or wrong answers. They're meant to be discussed again and again, in multiple contexts, and spur inquiry and justification. They can be asked in various grade levels and content areas, and they are immediately intriguing and spark debate. Essential Questions look like this:

  1. How do we know what is true?
  2. What does it mean to be free?
  3. Who should lead?

Look at question #1. This question could be asked in multiple content areas and grade levels. You could ask it of a 5-year-old or a 95-year-old, and you would get vastly different answers. If you ask it in a science class, you might use it to discuss the scientific method, theories, and hypotheses. If you use it with The Great Gatsby, you'll relate it to the reliability of the narrator. If you're in a history class, it's going to spark a conversation about primary sources.

You can see how these questions can be instantly engaging for students. Almost every student can provide a basic answer at first, and that answer will deepen as he or she continues to explore the question and look at it from new perspectives.

So what's a Driving Question?

A Driving Question is more specific than an Essential Question. It comes from the Essential Question, but provides a direction, or a reason, for exploring it. In the context of PBL, it usually lets students know what it is that we're trying to better understand or solve.

Let's go back to that first Essential Question: How do we know what is true?

We'll imagine that we're exploring this question as a part of a Government/English/Library Media collaboration project. The Essential Question can be asked in each classroom and explored from multiple viewpoints, but it's the Driving Question that gives us the PBL.

In this case, the Driving Question might look something like this: How can we create a resource to help people better identify "fake news"?

The trick to the Driving Question is that it is at once specific and yet open-ended. The resource created isn't named, so students have some options (teachers can also provide students with a list of potential options, but the emphasis should be more on understanding or solving the problem than on finding "the" correct answer). We could create a video, a website, a tool...sky's the limit.

Does this mean we leave it completely open-ended? Nope. The change here is that we move from expecting kids to create one specific "thing" to instead expecting them to meet certain criteria. We do this through the use of rubrics that are shared ahead of time. For instance, students might know that, among other things, whatever they create to answer the Driving Question needs to meet the following criteria:
  • Easily used by anyone with a Grade 5 or higher reading level
  • Provides an annotated list of links to reputable online sources
  • Includes a strong list of "look fors" to identify "fake news"
  • Provides an example of a recent "fake news" story, including how and why the story was spread

When we provide students with criteria like this, we give them space to think creatively and think big, while also ensuring that they are incorporating various learning objectives and meeting standards or expectations.

In Conclusion

By marrying the Essential Question with a strong Driving Question, we help students better explore real world scenarios, problems, and questions. We give them freedom to think outside the box and create real, important products and performances that can make a difference in their...and our...worlds.