New Microsoft Whiteboard Features

The new whiteboard interface, with the create panel activated, showing a flow chart, a picture, some drawing on the board, and some text.

Microsoft Whiteboard — the drawing app that you may have used in the past via the Share option in a Teams meeting — has received multiple improvements over the past few months. The interface has been completely updated with a number of new features and additional templates. Click this link to learn more about all of these new features.

What’s new today in our 365 tenant is a different way to access whiteboard files that offers some interesting new functionality.

To start the process, visit https://whiteboard.microsoft.com. You’ll see all of the different whiteboards you’ve created via the Teams meeting interface. Every time you launch a whiteboard it gets saved in this space so you can access it later (Microsoft Forms does something similar when you create a Poll in a Teams meeting, or create a math quiz using the Math Tools in OneNote).

New Whiteboards that you create are now also automatically saved in a folder named Whiteboards in your HWDSB OneDrive (whiteboards made previously are still available at whiteboard.microsoft.com, but are not accessible from OneDrive, and can’t take advantage of the new features I’ll detail below).

An animated gif showing how a whiteboard is now saved in OneDrive in a folder called Whiteboards.

This new storage location provides a variety of interesting opportunities. Now you can attach a file from your OneDrive to an Assignment in Teams and provide a copy of a Whiteboard to each of your students, giving them a creative way to share their learning beyond Word documents or PowerPoint slideshows.

An animated gif showing how to add a whiteboard to your assignment in Teams.

With that in mind, we can start to play with the idea of building out templates in Whiteboard that the students manipulate and submit. In the below example, I’ve clipped some images of some Base Ten blocks using Snip and Sketch, added them to a whiteboard using the Image option in the sidebar, then copied and (furiously) pasted (CTRL-V) a pile of copies of the image for the students to drag across the page.

An animated gif showing how to copy an image added to a whiteboard, paste it multiple times, and lasso the stack to an appropriate location on the canvas.
An image showing how to adjust the z-axis of objects added to the whiteboard, by clicking the three-dot context menu and selecting Bring to Front or Send to Back.

Once that was done, I added some text using the Text tool and dropped in an array of sticky notes using the Notes tool (click the context (three dots) menu on items to send them to the back or bring them forward so they sit correctly on the Z axis), et voilà: a whiteboard template that I can share with my students.

An animated gif of the finalized base 10 whiteboard, showing a mouse cursor dragging a 10 block over to the top of a sticky note.

The next step will be to sort out how we can share these with each other. On the surface there doesn’t seem to be an easy way to let others make a copy of a template I’ve created in OneDrive, but it can be done. You need to create a folder to house the whiteboards (this means moving them out of the default Whiteboards folder to another location you are willing to share from) and then share that folder with everyone, rather than trying to share the file on its own: the file will inherit the permissions of the folder. *Be careful when sharing folders with everyone: you don’t want to inadvertently share something that should be kept private.
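
For those comfortable with a bit of scripting, the same folder-level sharing can be sketched out against the Microsoft Graph API. This is a rough illustration only, not an official workflow: the access token, the “Whiteboard Templates” folder name, and the assumption that your tenant allows organization-scoped links are all mine.

```python
# Rough sketch only: create an organization-wide "view" link on a folder of
# whiteboard templates via Microsoft Graph, so the files inside inherit the
# folder's permissions. Token and folder name are hypothetical; tenant sharing
# policy decides whether this request succeeds.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token with Files.ReadWrite scope>"   # assumption: you already have one
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Look up the folder by path (a hypothetical 'Whiteboard Templates' folder).
folder = requests.get(f"{GRAPH}/me/drive/root:/Whiteboard Templates",
                      headers=HEADERS).json()

# 2. Ask Graph for a sharing link scoped to the whole organization; everything
#    inside the folder inherits it, so each whiteboard doesn't need its own link.
link = requests.post(f"{GRAPH}/me/drive/items/{folder['id']}/createLink",
                     headers=HEADERS,
                     json={"type": "view", "scope": "organization"}).json()

print(link["link"]["webUrl"])   # the URL colleagues can use to browse and copy templates
```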

You should be able to click on this link to find a folder I’m sharing with everyone: Whiteboard Templates (logged-in HWDSB users only). From there, click the three-dots menu beside the file name and use the Copy to action to grab your own copy of the above template.

Choice

I’ve been attempting to support some educators in creating Choice boards now that we are back delivering content online. These gained prominence when we first shifted to remote learning, popularized by the Bitmoji Classroom images that were used as the backdrop for a series of links to various activities. They are basically mini websites. The edtech community that has been sharing these has primarily been using Google Slides, but as a board shifting to a Microsoft toolkit, we’ve been struggling a bit with some of the limitations of OneDrive for Business, particularly when it comes to embedding slideshows in other spaces.

A Momentary Digression

There are some limitations to embedding content from OneDrive for Business that make this difficult when you are building out materials. This is what the process of grabbing an embed code in OneDrive for Business looks like:

Here’s what it looks like in the commercial version of OneDrive:

Did you spot the difference? That middle window that prompts you to create a sharable version of the slide show is the magic sauce that would allow users to share their creations with others, and post them in various Teams and Virtual Learning Environments, without worrying about permissions issues.

As it stands now, the only way to manage the permissions on that OneDrive for Business file is to share it with the specific users who you feel should be able to access it. Maybe this makes sense in business (although if the prompt on the embed panel is going to say “Share on blogs or Websites,” one would assume it would be configured to work within those spaces, where the audience is an unknown, anonymous entity); but in the classroom, if I build something there’s a good chance I’m going to reuse it with a different set of students, and there’s a good chance I’m going to share it with my educator colleagues so that they can use it as well. I need those creations to be accessible to anonymous users.

Sharing, Sharing, Sharing

Despite the proliferation of costly edtech tools and the nonsense of Teachers Pay Teachers, I still hold firm to the idea that Teachers, like Open Source software developers, need to find ways to freely share with one another. Having strong division team partners who share readily with each other is the backbone of a well-functioning teaching team, and that spirit should extend out to the larger teaching community. There are some exciting examples of this happening right now in post-secondary education in our own backyard. The eCampus Ontario Open Educational Resources library is an incredible example of disruption to the education resource industry.

Internationally, the Association for Learning Technology has created an Open COVID Pledge for Education:

Open COVID pledge for Education badge.

We pledge to make our intellectual property openly and freely available to the world to support educators, students and decision-makers, to help educational organisations survive and thrive, and to build a fairer and more resilient education system.

We pledge – where possible – to openly license or dedicate to the public domain our intellectual property

https://www.alt.ac.uk/about-alt/what-we-do/open-covid-pledge-education

Can we do the same in small ways here? Can we pledge to make our content accessible to others, and share between colleagues? Maybe that means publishing your content from Brightspace into the Learning Object Repository, or publishing it on https://share.school for others to access. Posting content to Twitter or on a blog can help your colleagues access and benefit from your work, and a community of sharing helps everyone.

One of the ways to do this is to build out resources in open tools that were built with sharing in mind. H5P is one of those tools: an interactive learning object builder that offers many of the same web 2.0 functions that other online tools charge for or use to lock your content away behind proprietary barriers.

Let’s Build One

Image of the WordPress dashboard, with the H5P menu enabled.

H5P is a plugin for WordPress (among other integrations). In my case I’m using the integration in the HWDSB Commons to create this. I’m going to use the Course Presentation format to create the presentation, in a similar fashion to how one would use Google Slides or Microsoft PowerPoint.

The slide background and add a slide buttons selected in the Interactive Presentation H5P editor.

I can add slides at the bottom, and add slide backgrounds if I have imagery from an existing presentation that I want to use. Just remember that if you convert a slideshow to images so you can upload them here, rather than using the text tools built into H5P, you are creating inaccessible content.

The majority of these choice boards usually consist of anchors taking users from one slide to another, and links within the slides taking users out to various resources. Both options are there, but beyond that there are a huge number of other interactives that you can build into your presentation.

  • Fill in the Blanks
  • Single Choice Set
  • Multiple choice
  • True/False Questions
  • Drag and Drop Activities
  • Mark the Words
  • Dialog/Flash Cards
  • Interactive Videos

Why this and not just a slide show

A close up of the Reuse and Embed options at the bottom of an H5P interactive.

The ability to Reuse means your colleagues can grab the file you created and remix it for their own uses.

The ability to Embed means you can take this object and add it directly to your content in the Virtual Learning Environment (the HUB at HWDSB), or you can add it as a Tab in a Microsoft Team.

Because it’s HTML5, and it was built to be accessed on multiple different devices, all the headaches that come with trying to create PDFs or PowerPoint presentations that open in a viewer that respects their links are gone.

Image of Microsoft Teams with an H5P presentation set up in a Tab

From choiceboards, the sky is the limit to what you might be able to build.

Would that be a fair assessment?

On assessment: measure what you value instead of valuing only what you can measure.

Andy Hargreaves

The Case Against Total Points and Category Weights

Traditional digital gradebook applications limit what we can measure. They are built around the concept of totaling up points from various assignments — whether as a single fraction, or as differently weighted categories — so each assignment is counted as a fraction of the whole. Depending on your age, this may be how you were assessed as a student from K-20. Assessing students using this method is problematic for a number of reasons (a quick worked sketch follows the list below):

  • Everything Counts
    • Difficult to consider more recent, consistent evidence.
    • Students start the course with a perfect score, and every assessment is an opportunity to lose marks
    • Students can’t outrun early failure even with recent success
  • Differentiation is Difficult
    • Assignments are weighted equally for the entire class, rather than using determination to identify which assignments provided each individual student with the best opportunity to demonstrate mastery
  • Students are assessed based on an assignment’s numeric grade, rather than assignments being an opportunity for the students to demonstrate levels of mastery of Overall Expectations
    • No assessment is perfect, but when we provide a single numeric grade on a complex assignment that covers a variety of expectations, we are assessing the student’s ability to complete that project (e.g., make a poster, write an essay, construct a model)
    • Providing a level of achievement on a single expectation, or a cluster of expectations, allows that project to be a vehicle for the student to demonstrate mastery of the curriculum expectations

Grading in this fashion stands in contradiction to many of the foundational concepts presented in Growing Success, and yet it’s the standard functionality behind many current grading applications:


What about the achievement chart categories? Shouldn’t we use these as category weights?

The document is intended to ensure that policy is clear, consistent, and well aligned across panels and across school boards and schools, and that every student in the system benefits from the same high-quality process for assessing, evaluating, and reporting achievement.

Growing Success, Page 2

Despite the fact that Growing Success was written in an attempt to provide more consistency in grading practices, across different subjects, different classrooms, and different school boards, the language used to describe the process of assessment is open to interpretation.

On page 16 of Growing Success we find a list describing how the achievement chart within each curriculum document should be used:

The purposes of the achievement chart are to:

  • provide a common framework that encompasses all curriculum expectations for all subjects/courses across grades;
  • guide the development of high-quality assessment tasks and tools (including rubrics);
  • help teachers to plan instruction for learning;
  • provide a basis for consistent and meaningful feedback to students in relation to provincial content and performance standards;
  • establish categories and criteria with which to assess and evaluate students’ learning.

That last bullet makes it seem as if the four sections of the achievement chart could be used as categories like those we see in a traditional grading program; but note that those programs treat categories as separate and discrete, while further down we are reminded that the categories “should be considered as interrelated, reflecting the wholeness and interconnectedness of learning”. This statement should be read as an indictment of separating these four categories out in a gradebook as separate weighted categories.

The Achievement Chart was not meant to be the sole means by which we recorded student performance: students are evaluated and assessed based on the overall expectations contained in each discipline’s curriculum document. The achievement chart lists the skills that a student uses to demonstrate their mastery of those overall expectations.

Educators can use the achievement chart to ensure they are designing rich, authentic tasks that offer opportunities for students to demonstrate their learning in diverse ways, but a student’s final grade should not be calculated primarily on their ability to know, inquire, communicate, and apply. For this reason, a student’s achievement should be calculated based on the Overall Expectations, and not on their mastery of the four quadrants of the achievement chart.

Overall Expectation Based Grading

Growing Success indicates that evaluation focuses on a student’s achievement of the overall expectations and, rather than tallying all of the evidence a student has provided, should focus on their more recent and consistent achievement. Creative and judicious differentiation in instruction and assessment for all students suggests that perhaps different evidence should be used not only for each overall expectation, but for each individual student.

We see this call for judicious differentiation echoed in the shared beliefs from Learning for All. Not only does “each student (have) their own unique pattern of learning”, but we must recognize that “fairness is not sameness”:

Triangles

With that in mind, our gradebook practices need to be flexible enough to allow us to individualize what data we will be using to calculate a grade for each Overall Expectation. We know from Growing Success that a variety of data should be utilized in the determination of a report card grade, and that we should value not only products, but observations and conversations as well.

We refer to this as triangulation of assessment data. Oftentimes this concept is represented as an equilateral triangle, implying that we should attempt to strike a balance between the three types of assessment data. This is a worthwhile conversation point if we are attempting to move our practice away from one that values products over all else, but it doesn’t go far enough to honour those unique patterns of learning we acknowledge from Learning for All.

If we recognize that some students will fare better in conversation than when faced with creating a product, then where possible, we should visualize each student as a unique triangle, where different values are applied to different assessment data based on a student’s unique strengths.
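
Here’s a small sketch of that “unique triangle” idea, with invented levels and weights. The point isn’t the specific numbers; it’s that the same three pieces of evidence can be weighted differently for a student whose strength is conversation.

```python
# Invented data: one student's evidence (levels 1-4) across the three sources.
evidence = {"product": 2, "observation": 3, "conversation": 4}

def weighted_level(evidence, weights):
    """Weighted average of achievement levels; weights are relative, not percentages."""
    total = sum(weights.values())
    return sum(evidence[source] * w for source, w in weights.items()) / total

equilateral = {"product": 1, "observation": 1, "conversation": 1}     # balanced triangle
talks_it_out = {"product": 1, "observation": 2, "conversation": 3}    # strength in conversation

print(round(weighted_level(evidence, equilateral), 1))    # 3.0
print(round(weighted_level(evidence, talks_it_out), 1))   # 3.3 (the triangle reshaped for this student)
```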

This requires something different than a calculator. Ultimately Growing Success calls on educators to determine a grade, rather than merely calculate one. Calculation is far easier. Setting all the necessary presets into a gradebook application and pointing at the resulting total is a far less complex process than what Ontario’s assessment policy asks of educators.

A number line depicting calculation and determination, with a map marker closer to determination.

How we are using PowerTeacher Pro to align to Growing Success

The authors of Growing Success didn’t write their document with gradebook applications in mind. Interpreting how a pedagogical philosophy can be made tangible and applicable in real-world contexts is difficult work. The following configurations help set the stage for educators to apply these assessment best practices in their work, and provide multiple opportunities for the teacher to exercise “judicious differentiation and creativity”.

For each overall expectation we default to calculating only the most recent scores. This ensures that when an overall expectation is assessed multiple times we capture the student’s growth, rather than penalizing them for early failure. This number can be adjusted on a student-by-student basis once data is entered.

For each Assignment, educators anticipate which Overall Expectations the students may exhibit throughout the process. There is no harm in selecting too many expectations. If a student doesn’t demonstrate mastery in a given expectation, it can be left blank without penalty.

Students are given levels of achievement (1, 2, 3, 4) — based on the achievement chart — for each of the overall expectations they have demonstrated. These levels on each assignment will ultimately be utilized in the calculation of a level of achievement on each of the Overall Expectations.
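
As a rough mimic of that configuration (not PowerTeacher Pro’s actual code), the logic looks something like the sketch below: blanks carry no penalty, and only the most recent recorded levels feed the expectation’s calculation.

```python
def expectation_level(levels, most_recent=3):
    """Levels (1-4) for one Overall Expectation in chronological order; None = not demonstrated."""
    recorded = [lvl for lvl in levels if lvl is not None]   # blanks are skipped without penalty
    window = recorded[-most_recent:]                         # only the most recent evidence counts
    return sum(window) / len(window) if window else None

# Hypothetical record across five assignments: an early level 1 falls out of the window.
print(round(expectation_level([1, None, 3, 4, 4]), 2))   # 3.67
```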

Diving deeper into Student Progress

These assessments are tabulated on the Overall Expectations (Standards) screen. When the level of achievement in an expectation is not consistent, the Professional Judgement Indicator (orange globe) will appear, indicating that further investigation may be necessary. Clicking the graph button in the Grade Inspector allows the educator to investigate further, and adjust the grade as appropriate.

In the below example, the student achieved a level 4 on the last two assessments, so the educator may adjust the mark so that it is no longer impacted by the initial Level 1 the student achieved on an earlier assessment:

An image of the PJI indicator on the Grading, Standards screen.

To move past the small drawer shown above, educators can visit the Overall Expectations Progress screen either by clicking the “Show More…” button in the drawer, or by navigating to Students/Overall Expectations (Standards) Progress. From this screen they can more fully manipulate the data. In the below example, I am exempting an assignment I feel should not be part of this individual student’s calculation:

If instead, I would like to examine how the student is progressing through all the expectations within the strand, I can do that by clicking the gear, and selecting Progress Options. In the following image I’m comparing the achievement within the Oral Communication strand in an English course:

In the Overall Expectations Progress screen, I can see the number of occasions I’ve collected assessment data for each of the Overall Expectations, and can delve deeper to identify those students who are still struggling with a concept I have covered. If an educator has covered an expectation multiple times and numerous students are still struggling, this can provide additional opportunity for professional reflection.

At two points through the year, teachers will determine an appropriate grade to appear on the report card, by assessing the levels calculated on overall expectations and how they are being used in the service of a strand or subject grade.

The shift described above, to providing levels on expectations rather than numeric grades on assignments, does need to cascade into how we communicate progress to students. Continuing to provide a more traditional grade on assignments while recording those items differently in a gradebook can lead to communication issues. The fact that we have a policy that espouses levels from an achievement chart, but then swaps those levels for letter grades and the problematic percentage scale on the report card, is already inherently confusing. Perhaps Growing Success 2.0 will address that discrepancy. Until then, this policy that many still grapple with, and one that 11 years after its publication we continue to argue over, contains some incredibly forward-thinking ideas about assessment and evaluation. Providing the proper tools at a district level to help operationalize those philosophies is a powerful step.

A Meek Monopoly

Two different articles in my news feed collided recently.

One, by Rick Wormeli, focused on how we mistakenly subject students to educational practices from academic stages in their future, in the belief that by introducing practices from the next level, we are preparing students to succeed.

The second article was about wireless headphones.

Thinking that we have to enact the policies and practices of the grade levels above us in order to prepare our students for those levels is deeply flawed.

http://www.amle.org/BrowsebyTopic/WhatsNew/WNDet/TabId/270/ArtMID/888/ArticleID/772/We-Have-to-Prepare-Students-for-the-Next-Level-Dont-We.aspx

Wormeli’s article, which you should read, makes the argument that we should prepare students for the next level in their academic journey by meeting their immediate developmental needs, rather than attempting to prepare them for an environment they aren’t mature enough to handle yet. “Remaining indifferent to developmentally appropriate learning experiences and simply demanding post-certification adult level performance is a quick way to boost the dropout rate.”

The idea of developmentally appropriate classrooms isn’t new. Montessori spoke about prepared environments that attended to the needs of students as they progressed through four planes of development. Piaget theorized four stages of cognitive development, and Vygotsky examined the failures that result when we challenge students with tasks beyond their zone of proximal development.

The 4U Quandary

There’s a “4U Quandary”. When working in secondary classrooms, assessment practices that work in lower grades suddenly become a challenge in 4U (the grade 12 courses used in calculating post-secondary entrance averages), where students are competing for spots in post-secondary institutions. Teachers see themselves needing to kowtow to the next academic stage, adopting university assessment and evaluation practices, despite the fact that they are teaching secondary-school-aged students.

Growing Success requires a percentage grade on a report card at two points throughout the year. In earlier grades we can confidently relegate these calculations to the back seat in deference to the exploration of grade-less classrooms, single-point rubrics, levels, and descriptive feedback; but over and over I hear about the struggle to implement this in 4U. Between marks-hungry students and anxious, calculator-toting parents, we revert to total-points calculations that hearken back to a time when we were in school, valuing the precision of a calculation over the more holistic determination used in previous years to professionally judge and assign a numeric value to a student’s learning. I’m not suggesting this idea prevails in all 4U corners. I know many educators who have found solutions to the quandary, but I’m also not denying this pressure, and the very real challenge of assigning a grade that will have such a powerful sway on a student’s future. It’s called a quandary for a reason. It becomes more difficult to justify determination over calculation when a single percentage might impact program opportunities. We are complicit in a system that believes a 100-point scale is a reliable measure (maybe it’s not).

Grading using total points implies that the student enters the class with a perfect grade, and will spend the rest of the semester battling to ensure they don’t lose any of those points. Shouldn’t the opposite be true? Shouldn’t every day be another opportunity to gather more evidence about what a student knows, rather than checking what they don’t?

What does this have to do with headphones?

The second article I read talked about how Apple — as the monopoly player in the phone market — triggered sweeping changes in the third-party accessory market by deciding to remove the 3.5 mm headphone jack from their devices. This decision resulted in exponential innovation in the Bluetooth headphone market.

In Ontario education, the public K-12 system is the monopoly player. We are Apple. Post-secondary institutions that depend on our product (students) are the equivalent of the headphone manufacturers — the third-party accessory creators. They are only one of a number of viable pathways students can take after secondary school, and they need to compete in a market that is beginning to question the value of their current product.

If the public system decided tomorrow that it was no longer going to rank students using percentages on a report card, the post-secondary institutions would have to shift to accommodate that change. They would need to value portfolios, or interviews, or other mechanisms to select students for programming. Yet instead of making the rules, we let the third-party market determine how we conduct ourselves on the previous rungs of the ladder. We operate a meek monopoly, feeling the tension that the post-secondary market inflicts on our practice, yet somehow powerless to intervene (we may also be unfairly judging the post-secondary admission system using our own 20-year-old experience in that arena). We innovate in pockets, but still ultimately need to play by their rules: the same rules we’ve been using for years to rank students by applying percentages to a report card. With our most mature students, who are developmentally at a stage where they are capable partners in experimental pedagogical practices, we feel a requirement to stifle innovation in deference to their future academic destination.

Growing Success will be ten years old next year. Perhaps it’s time to push the concepts further in the Assessment and Evaluation sections, and to rewrite how we report student learning in the Reporting section. Then we could stop gate-keeping for post-secondary and focus on learning. We should probably find a way to do that under the current policy, rather than waiting for a change that may not come. If we meet our students’ current developmental needs, won’t the rest sort itself out?

Carry that Weight

I’m in a new role this year at HWDSB, supporting the roll-out of a new grade book application. One of the discussion items that has come up over and over is how strands and overall expectations should be weighted. In our previous grade book application, teachers had the ability to set a weight on both strands and expectations at a class level. In the new application, the weighting of strands is currently confined to the course level (2020 editor’s note: this is fixed now).


This change has led us to set the strands to be equally weighted, not because we believe this is the proper weighting in all contexts, but to ensure we don’t force teachers into using a weight they don’t agree with, and to avoid what I’ve been referring to as “nefarious multipliers” lurking in the background, altering the grade provided to a student in unknown ways. I believe the thinking behind this configuration was to provide an equitable setting across the district: ENG1D1 can be weighted consistently at different schools around the board so that the resulting final marks are “fair” for all of the students. This kind of consistency isn’t new thinking. Some departments attempt to provide this consistency at a school level, agreeing upon how a course will be weighted despite different teachers covering the course. Some departments have historically come together to reach consensus on the weight of strands, with varied results.

I think there are pitfalls to attempting to configure a tool to ensure that two teachers in different schools across the district will assess in similar ways (two teachers across the hall will similarly struggle). This assumes that there aren’t myriad other factors that may come into play. How the teacher covers a given Overall Expectation is probably a much larger variable, yet we aren’t arguing for consistent lesson delivery plans at a district level. Anyone who has participated in the moderation of student work will attest to the ways in which our biases will always make grading a mix of determination and calculation. We may consistently hit broad levels of achievement when looking at student work (two teachers may be able to consistently identify level 4 work), but the chances that two teachers would come up with a matching percentage grade are much slimmer.

Should we consider the idea of setting district weights on courses? Departments across the board will struggle to agree, and the sheer number of courses we offer in Ontario makes the upkeep of something like this incredibly daunting. Does one stop at the strand? Are there Overall Expectations that should be weighted more heavily within each strand? Are the weightings set at the Curriculum Document level, or do the weights vary from course to course within a curriculum?

I will say that the task offers some rich opportunities for pedagogical dialogue. Our Physical Education department posits that Active Living skills should be weighted more heavily than Healthy Living skills. Does the curriculum support this view? Does the Modern Languages curriculum favour Speaking and Listening over Writing? Should the Arts place more emphasis on creation rather than analysis? In History, should my ability to analyse causes and consequences, and historical significance, outweigh the importance of knowing who Archduke Franz Ferdinand was?

I think we will find that equal weighting on strands doesn’t align with every curriculum document, and that there is merit to putting a differential weight on a strand in some circumstances; but I don’t support setting this weight at a district level. Not only because I believe this has to happen at a teacher/class level, rather than set universally on a course (whether at a school or across a district), but also because I believe this kind of weighting should happen at the end of the course, rather than at the beginning, if it needs to happen at all.

I believe teachers need to be able to augment weighting from one semester to another:

  • when five snow days affect the ability to cover a strand as fully as we normally might;
  • when current events guide our inquiry towards focusing on one section of the curriculum over another;
  • when teachers accommodate the direction in which students have led the learning during a particular course;

along with a host of other factors I’m not listing here.

Waiting on the Weight

How can I pre-calculate the weight of a course for students I haven’t met yet if I am running a student-centred program that attends to the strengths, needs, and interests of the students I have in front of me? As a professional, my program is constantly in flux as I explore different ways to engage the students who are in my class right now, and as I learn from past delivery mistakes. When I take advantage of current events and authentic opportunities to bring life to my subject matter, pre-determining becomes problematic. If my program is constantly changing, can the weight on the strands within that program remain static? How could the district setting possibly attend to the rich and varied learning environments scattered across our district? The 21st Century Competencies document released in draft by the Ministry of Education states that in our classrooms there should be an “emphasis on “deeper learning” (which) requires a shift in the role of teaching from “focusing on covering all required content to focusing on the learning process, developing students’ ability to lead their own learning and to do things with their learning. Teachers are partners with students in deep learning tasks characterised by exploration, connectedness and broader, real-world purposes” (Fullan & Langworthy, 2014, p. 7)” (21st Century Competencies, 2018, p. 32). Pre-calculating a weight on a strand only works if my focus is on content, and the delivery of that content doesn’t change from year to year.

Temporary Solutions

I’ve developed a workflow in our resources for looking at the calculation and providing differential weights at a strand level, giving teachers a mechanism to confirm how weight may alter the grade, and thereby feel more confident in the grade they are submitting. You can find it here:

One thing we have noticed is that often, once a teacher works through the steps to add a calculated weight to their strands, the resulting number is either consistent with the grade calculated under equal weighting (the difference between weighting one strand 5% or less below another, across three or more strands, is mathematically insignificant, even when students achieve disparately), or consistent with the grade they would have entered as an override when determining a final grade using their professional judgement. Additionally, the workflow is only beneficial when students are achieving inconsistently across the strands, as consistent achievement isn’t impacted by weight. Viewing how the students are achieving in the strands, and being able to view those percentages, can oftentimes be enough information to make a determination of what the grade should be on the report card, which can be done without resorting to a spreadsheet.
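
A quick sketch of that observation, with made-up strand marks: shifting one strand’s weight by five percentage points moves the final grade by well under a point, even when the strands are fairly uneven.

```python
# Made-up marks for three strands of one course.
def course_grade(strand_marks, weights):
    total = sum(weights)
    return sum(m * w for m, w in zip(strand_marks, weights)) / total

marks = [62, 78, 85]   # fairly disparate achievement across strands

print(round(course_grade(marks, [1, 1, 1]), 1))       # 75.0 (strands weighted equally)
print(round(course_grade(marks, [30, 35, 35]), 1))    # 75.7 (one strand weighted 5% less)
```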

An Assessment Concordance

The way in which our new grade book tool has been configured is an attempt to realize the Growing Success policy document within the confines of a technological tool. I’ve been in a number of conversations where Growing Success is used to support an argument. I’ve started annotating Growing Success with some of these ideas. You can find the content, and participate, at the following link: https://hyp.is/1kJGlhTiEem55m-Iky91pQ/www.edu.gov.on.ca/eng/policyfunding/growSuccess.pdf. You’ll need a Hypothes.is account to join the conversation, but I think you need a Hypothes.is account regardless (not only because the tool is awesome, but because having our education policy documents marked up and annotated by a diverse group of educators would create a rich and powerful resource).

Talking about assessment can sometimes feel like talking about religion. There are strong views, and deeply held beliefs. Growing Success has some nebulous areas that can be used to support a number of sometimes conflicting practices. Users want to have faith in the calculation provided by a grade book program, but ultimately the math will only take them so far. Believing in their students, and having a willingness to provide every opportunity for them to succeed, deserves significant weight.

Passwords

When companies with some of the best devs in the world reveal security and password issues, it soundly reinforces the need for a password strategy like a password manager, and reveals how ill-equipped most users are to preserve their privacy.

This is an image from the password reset notice from Twitter. It is decorative.

How many people do you know who use the same password for everything, including their banking info? How many who use multiple passwords store them in unprotected Notes apps, or write them on post-its?

How many times have you told students to use the same password they use for other things when signing up for a new tool, prioritizing remembering above safety? I know I have.

What does password management look like for elementary students on shared devices, and how do we instill the concept of a different password for everything, knowing the question of a breach is not ‘if’ but ‘when’? How do we give them the skills to be better than us?

Tools like password managers seem to be one of the few strategies to allow this and keep one’s sanity, but the costs don’t scale when you are talking about a school board full of students.
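
As a small aside on the “different password for everything” idea: generating a unique random secret per account is trivial with nothing more than Python’s standard library. This is only an illustration, not a substitute for a proper password manager, and the account names are made up.

```python
# Illustration only: one unique, random secret per (hypothetical) account.
import secrets

accounts = ["email", "banking", "learning-platform"]

for account in accounts:
    # 16 random bytes, URL-safe encoded: unique and effectively unguessable.
    print(account, secrets.token_urlsafe(16))
```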

Will password management eventually be recognized as a social necessity, leading to the emergence of open source responses akin to initiatives like Mozilla and Let’s Encrypt? How robust are password managers in tools like Firefox?

Lots of questions, and a few passwords I need to go change.