Woodside Unicycle Mechanics

by scspaeth | updated July 21, 2016

Dissecting unicycle practice to inform learning, teaching, and coaching. Modeling use of a reflective practice journal for project-based learning. We are unicycle mechanics.


Woodside School's mission commits its community to finding ways to develop the unique strengths of every student: "to develop the unique strengths of every learner for a lifetime of success." Vicky Dow, a fifth-grade teacher, and her colleague Linda Koch chose to work toward fulfilling that mission by using an approach called Genius Hour ( http://www.edutopia.org/blog/genius-hour-essentials-personalized-education-nichole-carter ). The approach derives from initiatives used in modern businesses to engage workers by giving them opportunities to develop projects they care about passionately. They wanted to find out whether they could engage kids more in school by inviting them to identify a personal passion and develop a project around that passion.

One of the challenges of student-led project-based learning is that students choose projects from a very broad range of disciplines. No single teacher or even small group of teachers can 'cover' the range required to support the disparate interests of all students. For example, when some kids expressed interest in programming video games, Vicky wanted to find ways to support the development of their Genius Hour projects. She turned to me to find out whether I'd be willing to help mentor some kids who had chosen programming projects. I agreed to help where I could and started participating in Woodside's pilot of the Genius Hour program.

At one of the initial sessions, Ashlyn, a unicyclist and Lego/Robotics/Programming Club participant, asked me what my Genius Hour project was. She elaborated, "I know you are a scientist so you probably have a genius hour project, too. What's your project?" I told her that I liked her question and that I would consider it and find an answer. 

This BuildInProgress project is my response to her question. I have tried to develop a project that both meets my needs as a personally engaging project-based learning activity and helps kids to see how an engaged adult goes about learning to master challenging goals.

My project consists of three facets:

  • Hill climb
  • Mechanics
  • Reflection on participation in Genius Hour (GH) and extensions of it beyond school

I wanted to create a project that fifth graders at Woodside could understand and appreciate. The Hill climb part of this project gives a visual, easily interpreted version of setting a goal, working hard to achieve it, and sharing the success, in the spirit of Woodside School's motto: "Models of success."

January 3, 2016 at 1:41 PM

During my attempts to challenge the hill, I hit a wall at about three-fourths of the way to the top. This entry describes the test that I conducted to find out whether I could master the challenge. The blue square represents a different type of marker than the inverted tear-drop progress markers.

The progress log includes a screen shot taken of the Garmin Connect trace of my attempts to determine whether I could climb the last quarter of the challenge. 

You can explore more of the steps toward mastering the challenge at the following link to the project map: https://www.google.com/maps/d/edit?mid=z1mBhnmMJ_Hw.kZzw1RdfCxv0&usp=sharing

 

January 4, 2016 at 9:57 AM

When I discovered the challenges of tracking features of interest for kinematic analysis of riding, I designed this approach to learning more. By supporting the unicycle seat from a tripod, I could track a dot without interference from feet and legs passing in front of the spot of interest.

The second image shows a composite of three representations of motion that I created using Plot.ly to display the data. 

January 4, 2016 at 10:36 AM

In my quest to understand and interpret the results of my gyro experiments, I found that I wanted to hook the abstractions to more familiar ideas. The Sensor Data program offers so many output options ( http://wavefrontlabs.com/Wavefront_Labs/Sensor_Data.html ) that it is not easy to know which to use. I needed a scale for my kinematics plots that related to something I knew. So, I figured out how to get Plot.ly to add gridlines to my plot that would let me see the intervals of rotation.

I added gridlines spaced at intervals of 2π (e.g. 0, 6.28, 12.56, ...) to the lower plot. But when I saw the axis labels for the y-axis, I knew I needed a better representation. The Sensor Data program outputs angular velocity in rad/sec because that is the convention in physics and mathematics. ( https://en.wikipedia.org/wiki/Radian#Advantages_of_measuring_in_radians )

To scaffold my developing understanding, I can convert the angular velocity from radians/sec to revolutions/sec (rps) and thereby have a better sense of the connection between the abstract representations of the analysis and my physical sense of movement down the hill.
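A minimal sketch of that conversion in plain JavaScript (the function names are mine, not from any of the tools mentioned):

    // Gyro output arrives in rad/sec; one revolution is 2*pi radians.
    const RAD_PER_REV = 2 * Math.PI;

    // Convert angular velocity from rad/sec to revolutions/sec (rps).
    function radPerSecToRps(omega) {
        return omega / RAD_PER_REV;
    }

    // Gridline positions at 0, 2*pi, 4*pi, ... for a plot still in radians.
    function gridlinePositions(maxRadians) {
        const positions = [];
        for (let y = 0; y <= maxRadians; y += RAD_PER_REV) {
            positions.push(y); // 0, 6.28, 12.57, ...
        }
        return positions;
    }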

The second figure shows a revised version with improvements. [Tried to add a link to the live Plot.ly version, but it is not yet cooperating: https://plot.ly/~scspaeth/113/rotation-rate-and-distance/ A third screen shot showed that Plot.ly and I were having problems communicating, but it became the featured image in the project display, so I removed it. It would be nice to be able to specify which image represents a project. Now that the project loads, I find that it is no longer the plot that I used for the screen shot. Something got corrupted and the distance data are displaced above the limit of display for this chart.]

January 5, 2016 at 10:49 AM

Can the sensors in an iPhone be used to implement a simple odometer?

Smart phones have an impressive array of sensors and computing capacity. Initially, they relied on accelerometers to sense orientation so that displays could change from portrait to landscape with rotation of the device. As engineers devised new ways to use phones, they added gyro sensors and applications that depended on them. 
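Before digging into the sensor outputs, here is the odometer idea reduced to a sketch (plain JavaScript; the wheel size and sample rate are illustrative assumptions):

    // Integrate the gyro's rotation rate about the axle (rad/sec) into a
    // cumulative angle, then convert revolutions to distance traveled.
    const WHEEL_DIAMETER_M = 0.61; // illustrative 24-inch unicycle wheel
    const CIRCUMFERENCE_M = Math.PI * WHEEL_DIAMETER_M;

    function distanceFromGyro(rates, dt) {
        // rates: angular velocities in rad/sec; dt: sample interval in sec
        let angle = 0;
        for (const omega of rates) {
            angle += omega * dt; // rectangle-rule integration
        }
        const revolutions = Math.abs(angle) / (2 * Math.PI);
        return revolutions * CIRCUMFERENCE_M; // meters
    }

    // e.g. at 20 samples/sec: distanceFromGyro(samples, 1 / 20)

Gyro drift accumulates in the integral, so this is only as good as the sensor over the length of a ride.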

"3D Gyro – The iPhone 4 has a 3D gyro. The gyro data is integrated with the acceleration data to create Gyro Enhanced Motion data including:

The number of choices for orientation alone is daunting for beginners. Several entries in the bulleted list have their own pages at Wikipedia that are filled with diagrams, equations and links to explanations. So, I just started with simple rotation rate and studied patterns. As I started to understand the patterns, I added more data to refine my understanding.  

The contrast in patterns for the blue and red traces in Figure 1 for the 10 to 20 second and the 20 to 28 second periods led me to guess that the "Gravitational component of 3D motion" might be a useful form for my analysis of effort during hill climbing. So, I included it in one of my standard short runs. 

Figure 2 shows that the gravitational component of 3D motion complemented the rotation rate data. It matches the Acc_X data during pushing and is simpler during riding. 

 

January 6, 2016 at 12:53 PM

I study bicycle science to see where ideas apply to unicycling. Figure 2.14 in Wilson's third edition of Bicycle Science led me to find and remix this simulation of linkages that I hope will help to improve my practice. https://books.google.com/books?id=0JJo6DlF9iMC&lpg=PA92&ots=TtYAAMhui2&dq=okajima%20%22pedal-force%22&pg=PA80#v=onepage&q=okajima%20%22pedal-force%22&f=false

See more here: https://scratch.mit.edu/projects/93311797/

Meta: I add this Scratch project because it may help fifth grade students to see their work on Scratch from a new perspective.

January 7, 2016 at 11:27 AM

Mark Guzdial writes about computing education in his ComputingEd blog. I found a reference in a recent post to a body of work about Cognitive dimensions of notations. https://computinged.wordpress.com/2016/01/06/interaction-beats-out-video-lectures-and-even-reading-for-learning/  It seems to have parallels to my choice of a mapping tool for keeping a journal about progress toward my goal. One of their dimensions is mapping of the notation to the problem space: 

"Closeness of mapping 
How closely does the notation correspond to the problem world?"
https://en.wikipedia.org/wiki/Cognitive_dimensions_of_notations#List_of_the_cognitive_dimensions

My choice of a literal map for this project seems apt but I am also trying to find a way for kids to see the challenge more concretely than an aerial photo conveys. The image shows that the Water Tower on Oak Street is much higher than the surrounding area but it does not convey the challenge of the slope directly. The second figure shows a slight uphill grade followed by a steeper section but it too is challenging to 'see.'

January 8, 2016 at 1:27 PM

My project to challenge the Oak Street hill is part of the Woodside fifth-grade project-based learning experiment. Kids have all chosen individual or group projects, and I help mentor them in the areas where they need it.

But, I learn from them, too. One group is building vehicles. During a session, they tested a Lego pull-back car (Figure 1). They used twin pull-back motors combined from two sets. With twin motors, it could accelerate very quickly.  They discovered that they were able to accelerate it fast enough to cause the vehicle to 'pop a wheelie.' They also discovered that by changing the 'pull-back' distance, they were able to vary the acceleration enough to change the extent of the wheelie. The horizontal bars in Figure 1 are 'wheelie bars' that they added to limit flipping. 

Their exploration of wheelie mechanics led me to reflect on their activity and to learn more about wheelies. I started to connect the phenomena to my efforts to understand unicycle mechanics. So I searched for information about wheelies and found a valuable resource at Wikipedia: https://en.wikipedia.org/wiki/Wheelie#Physics
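The key relation from that physics section, restated: taking torques about the rear-wheel contact patch, the front end lifts when the inertial torque exceeds the gravitational one. With b the horizontal distance from the contact patch to the center of mass and h the height of the center of mass:

    \[ m\,a\,h \;>\; m\,g\,b \quad\Longrightarrow\quad a \;>\; g\,\frac{b}{h} \]

That is exactly what the kids were varying: the pull-back distance sets the acceleration a, and the wheelie bars simply limit how far the rotation goes after lift-off.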

The Wikipedia article included references to other sources that are also useful. I'm keeping an annotated list of resources on my Diigo site. 

January 13, 2016 at 11:22 AM

During one of our sessions, one of the fifth-graders asked if I had a screw-driver. I pulled out the wrench and Allen key set that I carry for work on unicycles and asked what kind of screw-driver. He said that they wanted to open one of the Lego pull-back motors to see how it works, so they'd need a fine Phillips-head. Since this is a well established approach to learning, I wanted to support it if I could. But I also wondered whether the spring and gear-train inside a Lego pull-back motor might be more sensitive to disassembly than we have the capacity to handle.

So the next time we met, I brought both a set of suitable screw drivers and a plan B. In the first part of plan B, I encouraged their interest and strategy by reviewing reverse engineering as a learning strategy. Immediately they contributed examples where they had used it. It turns out that at least one of them had already opened a Lego pull-back motor and successfully reassembled it. So, we concluded that they did not need to do it again. But they also told me about other reverse engineering projects they had completed. I invited them to tell me more and to share their projects.

One student told me about opening a padlock to understand the mechanism and recreating the Lego padlock illustrated here.

In the second part of plan B, I shared a model of a pull-back motor, designed by a Japanese Lego engineer and kinetic sculptor, that they could reverse engineer with Lego parts from Woodside's Robotics kits: http://www.isogawastudio.co.jp/legostudio/modelgallery_model/b044.html
We will need more time to work on projects ...


January 13, 2016 at 12:58 PM
Comments (2)
Hey Mr spaeth I just finished my handcuffs
about 1 year ago
Let's start a project so that you can share a picture of it there.
about 1 year ago

Wheelie studies.

At last week's Woodside Lego/Robotics/Coding club, one of the kids demonstrated his pull-back car (single motor). His enthusiasm and engagement with it convinced me to get a new Lego set that included one of the pull-back motors. With it, I built a spare (i.e., simple or minimal design) model that captures many of the features of wheelie control:

  • Wheel base
  • Height of the center of mass
  • Variable acceleration

I brought it to Woodside this morning for the before-school club. As soon as I sat down to work at one of the tables, a few kids gathered around and shared activities that they had worked on since the last time I saw them. A fifth-grader had built a safe with a combination lock; since it was too big to fit in a back-pack, he offered to take a picture to share with me.

But they also recalled models that I shared with them last week, e. g. the giraffe unicycle. I said that I had remixed that model into a new form and they wanted to see it. So, I showed them the chain-drive Acrobot model. After a brief demo, they took over and tried to find solutions to the balance challenge. 

They also asked about my pull-back model. They didn't need a demo and started experimenting with it and suggested design changes that might improve it. I explained that some of my design choices reflected my interest in learning more about wheelies. I showed them how I planned to 'strap' a phone on it to measure wheelies that are too fast to see. They wanted to see how that would work so we found a couple of rubber bands and ran a test.

It made complete sense to them that there would be a smartphone app that would measure angles that would change with wheelies.* I forgot to bring the USB/iPhone cable that is required for data transfer. Instead, I showed a fifth-grader the data table that is part of the app's library function. He expressed surprise that it could collect that much data in such a short time. He wondered whether it might be taking measurements every nanosecond. I told him that it was fast but not that fast; I had set the value for 20 measurements per second.

I created the plot of acceleration as a function of time after returning to my lab. The tools for doing that are sufficiently user friendly that I suspect that this fifth-grader could do it and engage in substantive analysis of the results. I wonder whether we can turn that speculation into a testable hypothesis and experiments that we can find time to run.

*Meta: Apparently Apple's marketing campaign has worked to convince this generation that there probably is, "An app for that."
One of the kids most engaged by this line of inquiry was in the fourth-grade class where I helped run force and motion experiments last year. He was so enthusiastic about our work that he asked if he could leave briefly to go and invite others to come and see what we had done.

January 22, 2016 at 9:39 AM

After using the previous design, I modified it to facilitate exploration of lift during the pull-back process. I read further about the Smart Phone Mechanics Lab course from FUN that looks as if it may help with understanding and processing data from projects like this: https://www.france-universite-numerique-mooc.fr/courses/parisdescartes/70003/session01/about#

I brought the new design to Lego/Robotics/Coding club and shared the results with interested kids this morning. A third grader jumped at the opportunity to think together and explain to others the aims of the design. 

 

January 29, 2016 at 1:27 PM

I've been trying to understand the motion of unicycles in various representations including Smartphone as a data acquisition device. I enrolled in the Smartphone Pocket Lab course: http://mooc.cri-paris.org/smartphone-pocketlab-session-1-en/  developed by Joel Chevrier, Professeur de Physique.

To get an overview of the course, I reviewed the approaches to each of the modules. In the final module, Chevrier describes the application of the Frenet-Serret formulas: https://en.wikipedia.org/wiki/Frenet%E2%80%93Serret_formulas#Graphical_Illustrations He has developed an application that allows learners to see an internal frame of reference. Seeing the example in Wikipedia's entry made me think that I had seen a similar representation recently.

Last night, I recalled where. The image for this entry comes from a screen shot of Beetleblocks, a derivative of Snap ( http://beetleblocks.com/ ). Beetleblocks uses three.js to support the matrix operations that represent rotations in 3-D space: http://threejs.org/docs/index.html#Manual/Introduction/Matrix_transformations Inspecting the open-source code that runs Beetleblocks verifies that it includes elements of the Frenet-Serret formulas: https://github.com/ericrosenbaum/BeetleBlocks/blob/gh-pages/run/beetleblocks/three.js#L840

The second image shows that Beetleblocks can very simply demonstrate one of the classical forms for Frenet-Serret equations: the helix  https://en.wikipedia.org/wiki/Frenet%E2%80%93Serret_formulas#/media/File:Frenetframehelix.gif
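For reference, the formulas themselves, written for a curve parametrized by arc length s, with curvature κ and torsion τ:

    \[ \frac{dT}{ds} = \kappa N, \qquad \frac{dN}{ds} = -\kappa T + \tau B, \qquad \frac{dB}{ds} = -\tau N \]

The tangent T, normal N, and binormal B are exactly the kind of internal frame of reference that Chevrier's application displays.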

 

 

February 3, 2016 at 1:55 PM

The Beetleblocks tools ( http://beetleblocks.com/  ) help me to create a more comprehensible representation of the data coming from Sensor Data running on the iPhone. This 3-D plot shows the trace of the Beetle riding on the rotating wheel. The horizontal and vertical dimensions represent spatial coordinates. The depth represents time. 

In the Beetleblocks editor, you can rotate (orbit) around the structure to see the changes in direction at 2:00 (blue to lavender) and 8:00 (orange to red).  

February 7, 2016 at 3:46 PM

Woodside Lego/Robotics/Coding club was cancelled last week because of the snow day. I continued to work on my projects in my personal Lego/Robotics/Coding/Unicycle Club. I suspected that other participants did also. Eric Pulsifer cancelled WOW practices because of the space conflict with the Woodside Musical. When I went to the first public performance of the musical last night, I saw kids for the first time in several days. I asked one of my Lego engineers about progress during the time away. He said that he had worked on a project during the snow day and over the weekend. But he did not bring the project with him, so he would share it on Friday morning.

Soon after I arrived at Lego/Robotics/Coding on Friday morning, he brought his latest creation. This engineer has found what Ken Martin observes: "student writers need what all writers need: community." He designed and built a new form of lock that sits on the table in Figure 1. I encouraged him to talk through his engineering design as rehearsal for his writing about his work. He described his process of prototyping and then revising his design. In this case, Lego did not provide him with the parts he needed to implement the cross-bar (the more direct connection between the two blocks). So, he described the process of drilling through standard Lego pieces and gluing some together to create the required structure. This is a more sophisticated use of Lego than we commonly see in young users.

His description of his design made it clear to me that he had spent considerable time at his personal Lego Club meeting. When I asked him about the amount of time that he had spent, he estimated seven hours on Friday (the snow day), six hours on Saturday, and a few more on Sunday. So, here is a case of a kid investing a considerable amount of time engaged in processes that the Next Gen Science Standards (and we) want to encourage, but that could easily be overlooked by regular school because of competing demands for time. GH and several opportunities to interact in Extended School time give me the opportunity to provide him an authentic audience for his work.

I asked him if he ever documents projects with photographs and/or writing. He responded, "Sometimes," and then proceeded to demonstrate. In the photo, he is taking a picture of the project and showing me how he can use a pocket gaming system to take pictures and transfer them to a computer (from SD card storage). Later that day at GH, he demonstrated how he could share with other members of his team, too.

Meta: This is clear evidence for Connected Learning ( http://connectedlearning.tv/what-is-connected-learning  ) that we come to see and value when we engage with kids who are geeking out.

Resources:
WriteScience Template: 
https://drive.google.com/file/d/0B5PpTBXyaOy5b1JoX2VMUE5mSVE/view
Ken Martin: Rural Voices
http://digitalis.nwp.org/resource/586

February 12, 2016 at 1:36 PM

I used a combination of Beetle move instructions and absolute displacements to model the motion of the wheel-mounted smartphone. The first image in this step shows the curtate cycloid space-curve that models the motion. It is the large-scale bumpiness that I am trying to represent.

But the second image zooms into the model and displays an unintended artifact. By putting the beetle's pen down, you can see the path that the beetle takes to each new location. It shows an artificial zig-zag path that the real motion of the phone does not take. This micro-scale bumpiness is unrealistic.

I suspect that I could break the intervals into smaller increments and reduce the magnitude of the artifact. But that approach would keep the underlying artifact. I suspect that there is a better alternative. Stay tuned ...

Meta: I just spent half-an-hour writing the analysis and hypothesis suggested by the ellipsis in the last paragraph. I chose to write it in the project notes for the developing iterations of the Beetle Cycloid project using the Frenet-Serret Apparatus. Unfortunately, when I saved the project notes, Chrome indicated that WebGL had hit a snag and asked to reload the page to correct the problem. That snag resulted in my losing the writing. I hate having to recreate texts.

Sequential application of beetle moves does not give the micro-scale behavior that I want. After reading Rucker's, BYU Math 302's, and Sokolnikoff and Redheffer's descriptions of the Frenet-Serret Apparatus, I came to see the curtate cycloid as a good first approximation to the space-curve that the apparatus requires. So, I hypothesize that I can use the parametric equations for the curtate cycloid to calculate the beetle's location based on integrated values of angular velocity, and then use the incremental values of angular velocity and the beetle rotation blocks to calculate the rotation.
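A sketch of that hypothesis in plain JavaScript (variable names are mine; a is the wheel radius and b < a is the distance of the tracked point from the axle, following MathWorld's parametrization):

    // Curtate cycloid: the path of a point at radius b inside a wheel
    // of radius a rolling along a line.
    function curtateCycloidPoint(phi, a, b) {
        return {
            x: a * phi - b * Math.sin(phi),
            y: a - b * Math.cos(phi),
        };
    }

    // phi accumulates by integrating the gyro's angular velocities, so
    // each beetle location is computed, not chained from tiny moves.
    function positionsFromGyro(rates, dt, a, b) {
        let phi = 0;
        return rates.map(function (omega) {
            phi += omega * dt;
            return curtateCycloidPoint(phi, a, b);
        });
    }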

Resources:
Rucker's intro to F-S Apparatus:  http://www.cs.sjsu.edu/faculty/rucker/kaptaudoc/ktpaper.htm
BYU Math visualizations:  https://www.math.byu.edu/~math302/content/learningmod/trihedron/trihedron.html
Kreyszig on F-S Apparatus:  http://www.wiley.com/WileyCDA/WileyTitle/productCd-EHEP001850.html
Curtate cycloid: http://mathworld.wolfram.com/CurtateCycloid.html

February 13, 2016 at 9:28 AM

In 'A bumpy ride' ( http://buildinprogress.media.mit.edu/projects/3335/steps?step=17092 ), I hypothesized that I could use the curtate cycloid as a first approximation to the space curve that I am trying to model.

When I determined locations in the external frame using the curtate cycloid and then used incremental changes in angle to orient the beetle in the plane of rotation, I got both the intended undulations of the beetle in space and the elimination of the jagged micro moves that I was trying to remove (Fig 1). Zooming into the part of the curve that previously exhibited the most conspicuous anomalous behavior shows that the new approach eliminated the jagged path (Fig 2).

Figure 3 shows that the curtate cycloid space-curve also removes the unwanted motion in the case of stationary/idling on the unicycle. One advantage of using this data set is the compactness of the motion relative to normal riding (Figs. 1 and 2). In this case, I have added a small incremental change in the posY to improve visualization of the oscillation over time.

Meta: One of the challenges in making these changes came from the need to match the units of angular velocity that come from the Sensor Data program (radians/sec) with the expectations of the sin and cos functions of Snap/Beetleblocks (degrees). Initially, for the stationary/idling data set, I got oscillatory motion along the X-axis but the Z-axis was constant. Seeing the F-S trihedron helped me to visualize the motions and orientations.

February 13, 2016 at 1:47 PM

The step 'A smoother ride' represented a big step forward. But it left me with the question of what's the connection between beetle/turtle geometry and the space-curve represented with parametric equations in the external frame. So, I searched for the combination "turtle geometry" and "parametric equations" ( https://duckduckgo.com/?q=%22turtle+geometry%22+%22parametric+equations%22 ) and found a helpful introduction connecting the two ideas: https://eurologo.web.elte.hu/lectures/intrins.htm In the paper, Professor Uzi Armon demonstrates the connections. Since Armon wrote his examples for turtle geometry (rather than beetle geometry), I chose to explore his ideas in Snap rather than Beetleblocks. I copied my project notes from my exploration below:

2016-2-15 9:20 am
I used parametric equations to represent the curtate cycloid space-curve for my Frenet-Serret analysis of smartphone tracking. It solved the jagged motion problem that I had produced earlier using a combination of beetle moves and changes in the external frame. 
This morning, I searched for "turtle geometry" and "parametric equations" and found a full text paper that explores the connections between parametric and turtle geometric representations of curves. It uses the cycloid to illustrate the process.  
This project implements the turtle geometric representation of the cycloid. 
Resources:
https://eurologo.web.elte.hu/lectures/intrins.htm

For the image for this project, I stopped execution about three-fourths of the way through the second cycle, and you can see that the turtle is 'backing' out of the final cusp. The screencast close-up shows the process in more detail.

In his paper, Armon provides several other examples of translations between parametric representations of curves and turtle geometry representations. But they do not include the curtate cycloid that I need. So, I'll have to learn how to make that transformation.

"There are several additional problems connected to the mathematical aspects of intrinsic representations, that meantime have no solution. For example, although equation (19): s = a × sin(n× f ) determines the kind of the Hypo-Cycloid, according to the natural value n, yet ...  Additional question is: Does similar intrinsic representation exist for not simple Cycloids, like the Curtate Cycloid or the Prolate Cycloid?"

But this search https://duckduckgo.com/?q=%22turtle+geometry%22+%22curtate+cycloid%22 finds only Armon's paper, so it may require some work or be very challenging.

 

February 15, 2016 at 10:18 AM

Many health and wellness advocates encourage most people to increase their level of moderate to vigorous physical activity to overcome the adverse effects of sedentary lifestyles. A kinesiology group at the University of Illinois has shown that increases in physical activity can help to improve academic performance in elementary school students.

After reading these research reports, I wondered how unicycling in general and WOW practices in particular compare with such recommendations. The UofI group used heart-rate monitors to estimate the amounts of physical activity in their study. So, I borrowed a heart-rate monitor and used it to understand my personal activity levels while unicycling. I also used it with WOW members to see whether we could find an activity that would draw them into applications of science and math to an activity they enjoy. 

During one measurement session, an eighth-grade girl wore the monitor while riding in the Woodside gym. Simply riding in large circles around the gym elevated her heart-rate modestly. When she switched to stationary/idling (like treading water), her heart-rate rose even more. Since both activities occur on the level gym floor, I didn't immediately know why idling was so effective at elevating heart-rate. 

Yesterday, I think I found an answer. Until recently, I have used color change of the beetle to help keep track of the passage of time. During my work to visualize smartphone tracking, I saw the opportunity to use color change to indicate other physically relevant properties. I chose velocity first because I could already see the changes in velocity myself, but I wanted others to see them too. So, I highlighted them with a color change that would make it easier for others to see and compare.

When I applied this technique to the track of my stationary/idling, it clearly shows deceleration and acceleration at each end of the oscillations. The green beads show areas of relatively uniform motion. The red and blue beads at the ends highlight the deceleration followed by an acceleration in the opposite direction. Each deceleration and acceleration requires substantial force applied to the pedals that muscles produce. 
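One plausible version of that color rule, sketched in JavaScript (the threshold and the color assignments are hypothetical, not the values in my Snap project):

    // Color a bead by motion state: green for relatively uniform motion,
    // red/blue for the decelerating/accelerating ends of each oscillation.
    function beadColor(v, vPrev, vMax) {
        const speed = Math.abs(v) / vMax;      // normalized 0..1
        if (speed > 0.5) return 'green';       // cruising mid-swing
        return Math.abs(v) < Math.abs(vPrev)
            ? 'red'                            // slowing into the end
            : 'blue';                          // accelerating back out
    }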

I suspect that the effort required to arrest motion in one direction and then increase motion in the opposite direction is responsible for the increment in heart-rate that we observed earlier. If we can verify this speculation, then it could impact our strategies to encourage WOW members to increase their levels of moderate to vigorous physical activity.

Resource:
National Academy recommendations: http://www.nap.edu/read/18314/chapter/2
FITKids Program: http://engagement.kch.illinois.edu/node/111
U of I Kinesiology Research link: https://news.illinois.edu/news/14/0929fitkids_charleshillman.html

February 15, 2016 at 2:42 PM

Now that I have made more progress in understanding smartphone motion, I am ready to invest more time to improve the design of the support for my phone. Entrepreneurs have developed the concept of a Minimum Viable Product (MVP). In our case, I modified that to focus on a Minimum Viable Solution (MVS).

[Connect to Smartphone Mechanics Lab]

I went through several design iterations to improve support for specific attributes. I started with a cardboard support because it was easy to cut and hold in place with duct-tape. Figure 1 shows the overall view. Figure 2 shows the orientation of the phone and the rubber bands that hold it in place. Figure 3 shows detail of the MVS for holding the bands.

Evaluation: The MVS worked to get me measuring quickly. But it hampered other parts of my work because

  • Floppiness of the quick design introduced artifacts of unknown magnitude.
  • I could not ride when it might get wet.

I conclude that it is time to devise an improved support system.

February 17, 2016 at 11:49 AM

Two of Ms. Dow's fifth-grade students chose to work on STEM during recess at Woodside One Wheeler Winter Camp ( http://woodsideonewheelers.org  ). One of them is working on programming projects using Scratch. I had helped him learn to use "When [    ] key is pressed" controls to move a rocket ship around the world they had created. He has developed proficiency in Scratch programming so I wondered whether that would transfer to beetleblocks ( http://beetleblocks.com  ), too. 

I showed him the screen shot in the step Vigorous activity and briefly described how it connects to stationary/idling. Then they opened new projects and started programming side-by-side. Within several minutes, they were moving the beetle around 3-D space as easily as they had moved the rocket ship in 2-D space. 

They enthusiastically explored various features of Beetleblocks. Twenty minutes later, they called me over to see their unicycle wheel. One had used rotate, move and text blocks to create an image that resembled the tangential spokes of a unicycle wheel.

Finally, I helped them connect the tool to 3-D printing that we had spoken about earlier in another context. They started thinking about what kinds of objects they'd like to design and print. 

Reflection:

  • Kids chose this academically connected 'hard-fun' over recess. They had spent most of the morning engaged in the physical activity of riding and learning new unicycle tricks. So, it made sense to them to choose this connected learning activity even over recess.
  • The engaged learning they have done in GH projects in their regular classroom transferred seamlessly to an activity they do for the fun of it [fun(it)].
  • We wrapped up while they still wanted more and returned to unicycling thinking about ways to incorporate what we had learned. And each day, they came back for more.
  • During regular WOW practices, we focus on helping new learners and preparing for performances. During WOW Camps, we have more latitude to devote time to activities like these connected learning projects.

One of the fifth-graders observed that the curves in the vigorous activity post and the Beetleblocks dynamic representation were similar to an experiment that we conducted at a previous WOW Camp, where we rolled circus balls down 'V-ramps.' He reflected the following day, "On beetleblocks, we made something that looks like a unicycle and we compared it to a real one." [Fig. 2]

February 17, 2016 at 4:52 PM

Trying to get acceleration tracking working.
It took some fiddling to get it to work...

"If it is vertical rather than horizontal, it draws better." If you angle it down like this, then I could fill in the gaps."

  • Circle=Box
  • Box=Hexagon

"I'm going to test out my theory by resetting and trying again. Oh, I understand now! The screen give helpful hints. X, Y, Z show which way it moves." 

Prediction: 

"When we put the phone one the unnicycle, it might go crazy. Or it will work perfectly fine. It might be the exact same as the other program."

[We added several more sentences but lost them between two entry methods.]

We learned today:

  • The wi-fi connection was not working at the start, so we refreshed the page and it worked.
  • A bunch of different ways to study the program.
  • If you scroll down and then up [in Excel], it redraws the plot so that you can see detail that is hidden in the final image.
  • The difference between the two frames: the smartphone trace follows the phone's own frame, while the lab frame follows the frame of the computer [drawing the outer dimensions of the laptop display] (see the sketch after this list).
  • [We learned how to manage entry from phone vs laptop so that we don't lose text that we added in the browser interface.]
  • [We didn't have time to revise the Excel representation of the data during our afternoon session. So, I extracted subsets of the data and plotted them to show the interesting results hidden in the forest of 'big data.'
    • The upper left inset shows the iPhone frame accelerations
    • The upper right inset shows the lab frame accelerations
    • The lower right inset shows the lab frame y-accelerations plotted against the lab frame x-accelerations. Further subsets would show that the curves are inward spirals that diminish radial distance as the angular velocity decreases.]
  • [While we worked, I noticed how important hand gestures were to thinking aloud and describing how things change.]
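The sketch promised above: the textbook 2-D change of basis that relates the two frames, assuming θ is the phone's integrated rotation angle about the wheel axle (this is the standard transformation, not iProfMeca's actual code):

    // Rotate a phone-frame acceleration vector into the lab frame.
    function phoneToLab(accX, accY, thetaRad) {
        const c = Math.cos(thetaRad);
        const s = Math.sin(thetaRad);
        return {
            xLab: c * accX - s * accY, // rotation matrix R(theta)
            yLab: s * accX + c * accY, // applied to (accX, accY)
        };
    }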

February 26, 2016 at 3:53 PM

While fifth-graders worked on their 20% time projects, I split my time between observing them and modeling the activities that I do as an adult member of a community of practice. One of my additional activities includes learning more about how to use smartphones to collect data and then analyze and interpret the results. I am concurrently taking the online course from FUN in Paris.
https://sites.google.com/site/iprofmeca/entrer-daimecaprof

The Smartphone Mechanics team has produced software that only runs on Windows machines. I don't routinely have access to a Windows machine. So, during WOW Camp, I asked Diane, coordinator of the Woodside Learning Commons and school tech supporter, whether she might have a Windows machine that we could test as part of our 20% Project time. Since the library database and checkout system also runs only on Windows, she had an old Acer laptop running Windows 7 Home Edition that we could use for our explorations. When I got the original application running, a third-grader helped me to test it (Fig. 1).

The iProfMeca software queries apps running on either Android or iOS devices, transfers data, and displays results in near real-time. It also supports saving the data as .csv files for further analysis and interpretation. Since the laptop had an old version of Excel, I used it to examine saved data. I even saved one of the experiments to an Excel workbook for future reference. When I took the thumb-drive home one night after camp, I was surprised to see that the Apple Finder was able to preview the file and even display a thumbnail image of one of the plots I had produced. (Fig. 2) [Microsoft did learn new tricks.]

Figures 3 and 4 show that Google supports hard fun in this space, too. https://plot.ly/~scspaeth/113/rotation-rate-and-distance/ also supports this work and includes features (e.g. data subset selection using click and drag on a plot) that I already miss in other tools.

 

 

February 27, 2016 at 4:20 PM

When I returned to Ms. Dow's class on Friday after school vacation, I watched and listened as one of the fifth-graders who had worked on his GH project at WOW Camp shared it with his collaborator, who had not attended Camp. Since he had devoted substantial amounts of time and effort and had opportunities to get feedback (from peers and mentor), he had made great progress. He told me that he had finished work on his project.

His collaborator responded to the demo/presentation in interesting ways. At first, he seemed a little rueful. He was impressed at the progress but also disappointed that he had not contributed as much to the completion of the project. He commented, "Hey, dude. Why didn't you invite me over to help work on it over vacation?" Meta: Think about that statement for a minute. Here is a fifth-grade student trying to figure out how he could have found a way to work on 'homework' during school vacation.

We helped him understand that some of the recent progress took place at WOW Camp and not at home. We also reminded him of contributions that he had made earlier to the development of the concepts, mechanics and to the artwork. Since class members are starting to make presentations of their first projects, I encouraged both collaborators to identify the contributions that each had made and plan a presentation that shows the process by which they had arrived at the final result. 

The image for this step may require some explanation to fully appreciate it. It is a code fragment of M.'s but not from the Scratch game they produced. It comes from the Beetleblock explorations M. did during WOW Camp. The 'text' block takes a string of text and turns it into a 3-D rendering of the letters. The default value for the block is the word 'Hello' as is common in computer programming circles. The default value did not suit M. so he changed it to something meaningful to him: 'M__AndR____Games'  His identification as a member of the M__AndR____Games team supports his use of team identity as an element of further explorations. Score one for student-led project-based learning!

February 28, 2016 at 11:43 AM

Yesterday morning, I wrote in the step Parallel hard fun ( http://buildinprogress.media.mit.edu/projects/3335/steps?step=18106 ) that I missed some of Plot.ly's features as I wrangled my personal 'big-data' analysis. After using Plot.ly for several weeks, it seems natural to be able to click and drag over part of a plot and immediately have the plot rescale to inspect a subset of data. So, I went back to Plot.ly to see how it might help me both understand and explain our results to others. I learned that Plotly had chosen to make their Javascript API open-source: https://plot.ly/javascript/

In this step, I describe how I am using the iProfMeca Acceleration Tracker and Plot.ly to analyze my data. First, notice that I pared the traces from six down to three. When you have such a great quantity of data, you can minimize distractions by suppressing some of it. I chose to hide the traces but not their legend entries, to remind myself that they are there and available for more detailed analysis later. BuildInProgress's image display mechanism crops the image that heads a post; you have to click through to the image viewer to see the legend and vertical axis.
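Since Plotly's Javascript API is now open-source, here is what that legend trick looks like in plotly.js (placeholder data and trace names; the visible: 'legendonly' setting keeps a trace's legend entry while hiding it from the plot):

    // Three traces plotted, three parked in the legend for later.
    const t = [0, 0.05, 0.1]; // stand-ins for the .csv columns
    const accXLab = [0, 1, 0], accYLab = [1, 0, -1], accYPhone = [1, 1, 1];
    const accXPhone = [0, 0, 0], accZPhone = [0, 0, 0], accZLab = [0, 0, 0];

    Plotly.newPlot('plot', [
        { x: t, y: accXLab,   name: 'acc_X_lab' },
        { x: t, y: accYLab,   name: 'acc_Y_lab' },
        { x: t, y: accYPhone, name: 'acc_Y_phone' },
        { x: t, y: accXPhone, name: 'acc_X_phone', visible: 'legendonly' },
        { x: t, y: accZPhone, name: 'acc_Z_phone', visible: 'legendonly' },
        { x: t, y: accZLab,   name: 'acc_Z_lab',   visible: 'legendonly' },
    ]);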

By clicking and dragging across the middle section of the previous plot, Plot.ly displays an expanded version of the three traces for the time from 15 to 19 seconds (Fig. 2). The x- and y-components for the lab frame oscillate out-of-phase with a period slightly greater than 1 second. This form of out-of-phase oscillation is similar to the x- and y-components of circular motion in a plane. This connects to the left panel of the Acceleration Tracker lab-frame plots, which displays the y- vs. x-components of acceleration in the lab frame as segmented circular traces: https://buildinprogress.s3.amazonaws.com/image/image_path/26181/IMG_02-26-2016-17-02-47.jpg?v=1456762978840

The y-component of acceleration in the iPhone frame traces the envelope above the oscillating x- and y- lab frame accelerations (Fig. 2).
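In symbols, with A the (roughly constant) magnitude and ω set by the just-over-one-second period:

    \[ a_x = A\cos(\omega t), \qquad a_y = A\sin(\omega t), \qquad \sqrt{a_x^2 + a_y^2} = A \]

The quarter-period offset between the components is what draws the circle in the y- vs. x- phase plot, and the constant magnitude A is the envelope that the phone-frame trace rides along.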

The images included in this step are static downloads from the Plot.ly plot. You can try some similar manipulations at the live version of this plot: https://plot.ly/~scspaeth/140/acceleration-tracker-iprofmeca/

Figure 3 illustrates the need to develop interoperability processes. I restarted the smartphone sensor app thinking that it could analyze multiple runs of the experiment. Unfortunately, that resets the time, so the data set becomes less useful to Plot.ly. I had to laboriously delete the second run in order to see what happened during one run. In most cases, it makes more sense to do the second and subsequent runs without stopping the app. Then Plot.ly's tools for selecting data work for analyzing separate runs.

February 29, 2016 at 10:05 AM

I've made good progress in collecting acceleration and gyro data and plotting them to help develop my understanding. In the step Plotly.js I described using Plot.ly for some of that analysis. It's quick, simple and powerful. But it doesn't always support everything I want or need to do. So, I turn to Snap or Beetleblocks to analyze some parts of my work. 

Until this morning, I had used a cumbersome method to get access to the data. Snap provides an (http:// [   ]) block to read pages from CORS-compliant sites. Admins at the Snap site set up their server so that it complies, and you can access the test url that illustrates the principle. Unfortunately, most sites don't comply, and unless you run your own server and are a skilled server admin, that approach is not available to most users.

...

This morning, I read the digest of the discussion forums on Piazza for the current version of the course Beauty and Joy of Computing (BJC). Admins for BJC's Piazza site keep enrolling me in the current version even though I have not been part of that course for several semesters. But I still lurk because it helps me stay in touch with efforts to improve opportunities to learn computing.

In today's digest, an anonymous student asked,
"importing data 3/01/16 4:46 PM
Can you import lists/tables from Google Sheets or Excel and turn them in to lists? 
Thanks!"

Brian Harvey, one of the leads in the development of Snap, replied with an explanation and a sample demonstration. In his explanation, he described this method for reading .csv files hosted on webpages. He demonstrates loading a file producing more than a million cells (Fig. 2). The Snap table display handles it nicely, but it is so large that he notes the project can't be saved after the variable is loaded.
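Outside of Snap, the same recipe in plain JavaScript would look something like this (a bare-bones parse that ignores quoted commas; the hosting site still has to allow cross-origin requests):

    // Fetch a web-hosted .csv and turn it into a list of lists,
    // the textual equivalent of the url-block-plus-split method.
    async function csvToTable(url) {
        const response = await fetch(url);
        const text = await response.text();
        return text
            .trim()
            .split('\n')                   // rows
            .map((row) => row.split(',')); // columns
    }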

Harvey also describes a way to load local text files into Snap variables using a context menu for a variable's stage watcher. It is the feature I need to make my work more straightforward. Unfortunately, it is not something that you discover easily.

The following snippet from Jens Moenig's history of Snap development shows that the capacity to load local text files into variables has been available since January 2013.

130123
------
* Import / Export text files from variable watchers (context menu)
* Max. size of displayed text in CellMorphs and value bubbles set to 500 characters

https://github.com/jmoenig/Snap--Build-Your-Own-Blocks/blob/master/history.txt#L1398

I didn't find any reference to the feature in the release notes: https://github.com/jmoenig/Snap--Build-Your-Own-Blocks/releases/tag/4.0.5 I also searched Snap's entire reference manual ( http://snap.berkeley.edu/SnapManual.pdf ) for any documentation of that feature. I used the terms 'data' and 'import' and could not find any explanation. I found lots of references to importing sprite costumes, sounds, and blocks. And I found lots of references to data, but none to this idea. Harvey refers to an explanation of blocks for data handling in a unit of BJC. Does BJC Chapter 4 go into more detail on this trick? I don't recall seeing it the last time I read that unit. Nor do I find it on a closer reading of Chapters 4 and 5:
http://bjc.edc.org/bjc-r/cur/programming/4-internet/1-web-data/4-scraping.html

How can we make 'tricks' like this more accessible? Subsequently, I found this discussion of importing text into Snap. It illustrates how the community gathers and processes input: https://github.com/jmoenig/Snap--Build-Your-Own-Blocks/issues/740

March 2, 2016 at 4:32 PM

Notes copied/revised from my Snap project notes:

2016-3-4 10:30
iProfMeca uses matrices and transformations to represent the Frenet-Serret Apparatus. I am making progress on understanding and representing smartphone data using Plot.ly and, where they haven't yet developed the tools, Snap.
Jens recently released the major upgrade of Snap that includes table representations. I searched the Github repository for 'linear algebra' and found an exchange between a mathematics teacher and developers of Snap:  
https://github.com/jmoenig/Snap--Build-Your-Own-Blocks/issues/1031

Brian Harvey responded to the request with two examples of matrix operations: (item () () of [array]) and (transpose [array]). I recreate those here to see how they work with the new table representations. Figure 1 shows that they concisely represent some basic matrix operations.
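For comparison, here is what those two reporters do, sketched in JavaScript (Snap lists are 1-indexed; the arrays here are 0-indexed):

    // (transpose [array]): swap the rows and columns of a list of lists.
    function transpose(matrix) {
        return matrix[0].map((_, col) => matrix.map((row) => row[col]));
    }

    // (item (2) (3) of [array]) corresponds to m[1][2] here.
    const m = [[1, 2, 3], [4, 5, 6]];
    console.log(transpose(m)); // [[1, 4], [2, 5], [3, 6]]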

Meta: A search on the Github repository for Snap led to this thread of interest to me. Perhaps it is a strategy for addressing the question that I raised in the previous step 'Improving access.'

March 4, 2016 at 11:13 AM

An old unicycle came to me for repairs with a stripped crank-arm axle. While it is a model that has been eclipsed by new, improved designs, this particular unicycle has played an important part in the development of unicycling in our public school programs, so it seems worth the effort to recondition it. My sources told me that it was one of the first unicycles in the Gym Dandies program in 1981 ( http://www.gymdandies.org/  ) and has been used continuously since then to support their program. 

After considering how to change it for another axle/wheel assembly, I saw an alternative. The taper on the square shank from the time of manufacture was never great enough to fully seat the crank-arm (Fig. 1). For more than 30 years, this unicycle has used just two or three threads at the end of the axle to hold the arm on the axle. Efforts to pull the arm further onto the tapered shank left striations on the taper surfaces (Fig. 2). They are probably also evidence that mechanics had applied serious torque to the crank-arm nut.

I used a fine metal file to change the taper so that the crank-arm would seat deeper on the shank and expose more of the unstripped threads. Fortunately, the steel of the axle is soft enough that the file easily removed the striations and enough of the shank to let the arm seat deeper (Fig. 3). With the additional depth, I was able to catch more threads and secure the crank-arm. But the softness of the steel also makes it clear why the threads can strip more easily than desired.

While I can't project how many more miles it has in it, I anticipate that someone will get some good use of it. And others will appreciate its contribution to our shared love of unicycling. Thanks to James Shields for finding this gem and getting it to a new home.

2016-3-8 10:40
Both bearings slide back and forth along the axle, so they would be easy to swap if I could pull the old bearing from the 'good' axle. The 'good' axle would add years to the functional life of this cycle. This morning, I felt a small displacement along the axle in response to pushing and pulling (by hand). With that sign of motion, I carefully used a lever to apply more force to get it off. I rotated the bearing to ensure that I didn't cant it. It worked: I pulled the bearing, so it is ready to replace the worn assembly. I found the nylon(?) washers that protect the inner bearing race from wearing on either the hub or the crank-arm.

2016-3-14 12:40
WOW ( http://woodsideonewheelers.org ) celebrated its tenth annual community show last Friday. We used that opportunity to recognize the longer history of unicycling in schools (Scarborough and Topsham) by presenting the reconditioned cycle to its original owner. The following morning, I saw a picture posted to friends online of the determined look of a Woodside second-grader riding his dad's old unicycle. The second generation of 'set challenging goals, work hard to achieve them, and become a model of success' will be exciting rides for all of us.

March 6, 2016 at 2:36 PM

Initially, relative crank orientations helped me to visualize motion of the wheel. But more detailed analysis of cranking up the slope challenge requires more detailed measurements of absolute crank angles. I can establish an initial crank orientation but integration of angular velocity or double integration of accelerations to get position introduces error. And gyro-sensors inherently have drift. I want to find a way to measure absolute crank angles or at least determine an absolute orientation once per revolution. I think the complementary measurements of accelerometer and gyroscope make that possible. 
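One standard way to exploit that complementarity is a complementary filter; here is a sketch under my assumptions (the blend factor is a typical textbook value, not something tuned for this wheel, and a real version would handle the wrap at ±180 degrees):

    // The gyro tracks fast changes; the absolute angle derived from the
    // accelerometer's gravity components slowly corrects the drift.
    const ALPHA = 0.98; // weight on the gyro path each step (tunable)

    function fuseAngleDeg(prevDeg, gyroRateDegPerSec, accAngleDeg, dt) {
        const gyroDeg = prevDeg + gyroRateDegPerSec * dt;
        return ALPHA * gyroDeg + (1 - ALPHA) * accAngleDeg;
    }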

The new table tools in Snap (4.0.5 https://github.com/jmoenig/Snap--Build-Your-Own-Blocks/tree/4.0.5 ) make it easier to understand those calculations, so I test them in this step.

Figure 1 comes from data collected using iProfMeca's Acceleration Tracker as in other steps in this thread. But this time, I use Snap rather than Plot.ly to control plotting. Snap supports this exploration better than current versions of Plot.ly because it allows me to select subsets (in time) of the larger data set.

By iteratively adding progressively more restrictive tests, I located the quadrant and then the interval in which acc_Y_lab crossed its axis. Figure 2 shows the phase-shifted oscillations with sprite stamps at the beginning of the interval containing the crossing event.

Figure 3 shows an alternate perspective on improving the determination of absolute angle (red < blue < green). It also shows how the new table views of Snap 4.0.5 make it easier to see changes in two-dimensional data sources. And it shows a polar-coordinate grid incorporated into a stage costume that is more appropriate for this data representation ( https://commons.wikimedia.org/wiki/File%3APolar_graph_paper.svg by Mets501 at the English-language Wikipedia, CC-BY-SA-3.0 ( http://creativecommons.org/licenses/by-sa/3.0/ ), via Wikimedia Commons).

March 7, 2016 at 1:20 PM

I'm working on understanding the rotation of my smartphone in three-dimensional space. The FUN Smartphone Mechanics course, Snap, and Beetleblocks are helping me develop intuitions. A news feed brought another resource to my attention: Pixar and Khan Academy have collaborated to develop learning modules they call "Pixar in a Box" ( http://ww2.kqed.org/mindshift/2015/09/03/pixar-in-a-box-teaches-math-through-real-animation-challenges/ ). They try to show how mathematics can be used to solve real challenges in the 3-D animations that Pixar produces. Since I am trying to use mathematics to solve a challenge, I checked out their work.

One of their modules focuses on the mathematics of translations and rotations ( https://www.khanacademy.org/partner-content/pixar/sets ), so I jumped ahead to try that one. I want to learn whether these resources might help kids to answer the question, "When am I ever going to need this math?" When I tried to 'practice' what I learned, I found some of the instructions to be confusing. So, as the site invites users to provide feedback, I submitted a suggestion for revising the instructions:

  • 'The instructions say, "You MAY round your answer to two decimal places." The answer checking system requires rounding to two decimal places. So, it would be better to instruct, "Round your answers to two decimal places." '

Reflection: This step is a bit of an excursion from my exploration of unicycle mechanics, but I appreciate the effort of Pixar and Khan Academy to provide engaging resources that help students connect their interests to academic work and standards.

March 12, 2016 at 11:19 AM

I've been trying to synch my kinesthetic sense of controlling the unicycle with the data that I collect using the smartphone mounted on the tire. The new table tools in Snap help me to understand and manage the data sources. The flexibility of fine-grained selection of data to plot also helps. But despite considerable amounts of time devoted to the effort, I haven't been able to get the cumulative angle (derived from gyroscope output) to synch with the acc_X_grav and acc_Y_grav data from the accelerometer (with help from sensor-fusion programs in the smartphone).

My legs tell me that they decrease the angular velocity of the wheel during a relatively limited sector of the complete wheel rotation (once for each leg). So, I'd expect the rotation of the virtual phone (the turtle) to change velocity in phase with that. Since gyroscopes are well known to exhibit drift (intrinsic and integration), I wondered whether drift might be causing my problem. But yesterday, out of frustration with the angular velocity integration, I turned to focus on using the acc_X_grav and acc_Y_grav as primary data for determining absolute angles.

As the smartphone rotates on the wheel, the gravitational components of x and y acceleration oscillate. But I saw that the acc_Y_grav exhibited inflection points that aligned nicely with the local minima in the angular velocity trace. They also are consistently located in the part of the rotation where, as I intuitively sense kinesthetically, I apply the 'brakes.'

I could see the synchrony in this feature of the curves despite the drift that I still had not resolved. So, this result gave me the incentive to push more to understand the acc_Y_grav measurements. A phase-plot of the acc_Y_grav versus acc_X_grav formed a circle centered at the origin. Since the arc tangent function allows one to calculate the corresponding angle theta, I tried to use it to calculate rotation angles to compare with values coming from the gyroscope. Fortunately, Snap provides these functions as a choice in the math functions block. 

When I inspected the output of the atan function applied to my data, I recalled that mathematicians restrict the outputs of the arc trig functions to limited ranges. Since I wanted to compare with a source that was not so restricted, I needed to learn more about arc trig functions.

By refreshing my memory about the inverse trig functions at Wikipedia ( https://en.wikipedia.org/wiki/Inverse_trigonometric_functions#In_computer_science_and_engineering ), I learned about the atan2 function that might help me calculate the desired result. Unfortunately, Snap does not provide the atan2 function as part of the math functions block (because it requires two inputs while other functions for this block all require only one input). So, I used the specifications on the Wikipedia page as a kind of pseudo-code and created a reporter that calculates the atan2 (with conventional orientations).
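Rendered in JavaScript for readability, the piecewise spec I followed looks like this (my Snap reporter mirrors the same branches in blocks):

    // atan2 per the Wikipedia piecewise definition, reported in degrees.
    function atan2Deg(y, x) {
        const deg = (r) => (r * 180) / Math.PI;
        if (x > 0) return deg(Math.atan(y / x));
        if (x < 0 && y >= 0) return deg(Math.atan(y / x)) + 180;
        if (x < 0 && y < 0) return deg(Math.atan(y / x)) - 180;
        if (y > 0) return 90;  // x === 0
        if (y < 0) return -90; // x === 0
        return NaN;            // undefined at the origin
    }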

March 16, 2016 at 5:09 PM

After I finished coding atan2 ( https://en.wikipedia.org/wiki/Atan2 ) using Snap blocks, I saw that it gave the correct output but it was cumbersome. Since I am primarily interested in using the function rather than nuanced programming, I searched the  Snap Github repository ( https://github.com/jmoenig/Snap--Build-Your-Own-Blocks/search?utf8=%E2%9C%93&q=atan2 ) to see whether someone else might have already produced more elegant code for this function. This search found just one result in object.js where Snap developers use it to control bouncing off edges ( https://github.com/jmoenig/Snap--Build-Your-Own-Blocks/blob/bbb5106879aeefa918b1a30c75aef94bc8a086a7/objects.js#L3636 ). 

    this.setHeading(degrees(Math.atan2(-dirY, dirX)) + 90);
    this.setPosition(this.position().add(
        fb.amountToTranslateWithin(stage.bounds)
    ));

So, I learned that JavaScript developers had already coded atan2 in more elegant form as part of the language's math functions: Math.atan2(). And Snap developers have provided a JavaScript block to access such functions. So, I decided to learn to use the JavaScript block for this task. That decision led to a serendipitous discovery: I found a cause for the drift that had plagued me for several days.

Some math functions are fundamentally based on radians rather than degrees as the measure of angle. So, programmers must sometimes convert results from radians to degrees or the reverse. For example, the Sensor Data program reports angular velocities from the gyroscopes in radians/sec. Since Snap sprites primarily use degrees, I had to convert the Sensor Data outputs from radians to degrees.

So, when I programmed the atan2js block, I included the conversion from radians to degrees. Initially, I used the same conversion factor that I had been using for the analysis of my data. But when I ran unit tests for this function, I did not get the expected results. As I worked through this bug, I discovered that the scaling factor I had used for the conversion from radians to degrees was wrong. So, in the final version of the atan2js block, I used a JavaScript constant, Math.PI, to calculate a precise value of the conversion factor, as sketched below.
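
For the record, the body of the atan2js block now amounts to something like this one-liner (the variable names are illustrative):

// JavaScript block body: atan2 reported in degrees.
// Math.PI gives a precise conversion factor (180 / Math.PI = 57.29577951...),
// which avoids the transposed-digit constant described in the analysis below.
function atan2js(y, x) {
    return Math.atan2(y, x) * (180 / Math.PI);
}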

Analysis: When I first coded one of those conversions a few weeks ago, I used an approximate value for the conversion factor, assuming that I could go back later and substitute a more precise value. Unfortunately, I recalled an incorrect value for the conversion. The correct value is (180/π), or 57.2957795...; for my purposes 57.3 would have sufficed. But I mistakenly transposed the '7' and '3' and used 53 rather than 57.3. That factor yields only 92.5% of the true angle increments. So, I had unintentionally introduced an artificial source of drift that made it seem as if my intuitions were out of sync with my data.

Implication: When you have a computer to recall quantitative details, use it rather than fallible memory.

March 17, 2016 at 9:31 AM
Comments (0)

In his book, The Everyday Work of Art, Eric Booth ( http://ericbooth.net/the-everyday-work-of-art/ ) argues that we need to augment attention to 'big A' Art (famous works and artists) with our everyday experience of art in our lives. He calls on Teaching Artists to help everyone see the everyday works of art in their lives. 

For several weeks, I have been trying to find a visual representation of the data that I collect using a smartphone mounted on the wheel of my unicycle. Many of the steps in this Build In Progress project illustrate attempts to find such a visual representation of this quantitative information. I've tried a variety of tools and approaches. They helped me make progress, but none of them connected the kinesthetic sense that I get while riding to the visual representation.

Several days ago I shifted back to using Snap because of its greater control and flexibility. While I worked on my latest iteration of this effort this morning, I saw the similarity of my process to Eric Booth's advocacy. The criteria I use in making choices bear a remarkable relationship to everyday art. In this case, the oscillations peak in red while the zero-crossing phases are dark slow zones. 

While it will take more work to help others understand this 'work of art,' it feels like I'm on a good path toward that goal. I can feel the connection of the visual with my experience riding. 

2016-3-18 9:20
Added control to change color for negative angular velocity. Last night, I also came to see how gravity is a pole star for analysis of pedaling. The acc_X_grav and acc_Y_grav are the unit-scaled coordinates from the parametric equations for cycloid motion. Need to develop that idea.


March 17, 2016 at 1:07 PM
Comments (0)

The progress I made in using acc_X_grav and acc_Y_grav to create a visual representation that supports my kinesthetic experience led me to reflect further on the meaning of measures of gravitational acceleration. Last night, I thought of the analogy to pole stars ( https://en.wikipedia.org/wiki/Pole_star ). In celestial navigation, pole stars play the important role of a 'fixed' reference point. If you attend to the location of the pole star while moving, it is much easier to maintain a true course than it is by dead reckoning.

In my analysis of unicycle motion, it seems as if gravity serves as a kind of pole star or, more abstractly, a fixed reference axis. If I can track the motion of my smartphone while attending to the location of my pole star, perhaps I'll find a better course.


March 18, 2016 at 11:59 AM
Comments (0)

Generalized motion in 3-D space poses several analytical challenges. Martin Baker describes some of the challenges in his very helpful Euclidean Space site (e. g. http://www.euclideanspace.com/physics/kinematics/index.htm ). I find that Snap ( http://snap.berkeley.edu ) and Beetleblocks ( http://beetleblocks.com ) provide me with the tools to develop my understanding.

For a bicycle on a straight route, the paths of the pedals and of a smartphone mounted in the plane of rotation of the rear wheel trace, to a first approximation, curtate cycloids. The comparable paths for the front wheel of a bicycle and for a unicycle are more complex.
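
For reference, the standard parametric equations for a curtate cycloid describe a point mounted at radius r inside a wheel of radius R rolling along a line; a minimal sketch with my own variable names:

// Curtate cycloid: a point at radius r (r < R) on a wheel of radius R
// rolling along the x-axis; t is the wheel rotation angle in radians.
// x combines the translation term (R * t) with the cyclic term;
// dropping R * t gives the compact, non-translating version.
function curtateCycloidPoint(R, r, t) {
    return {
        x: R * t - r * Math.sin(t),
        y: R - r * Math.cos(t)
    };
}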

In the step STEM Art? ( http://buildinprogress.media.mit.edu/projects/3335/steps?step=18751 ), I analyzed a simplified case. I presented a 2-D representation of the motion of my smartphone mounted on the wheel. While I distorted the trace to make the visualization more compact, it demonstrated that the phone's motion followed a curtate cycloid path to a first approximation.

That success led me to determine whether I could create a similar representation in three dimensions using the 3-D tools of Beetleblocks. As in the 2-D case, I plotted the gravitational accelerations for X and Y in the corresponding slots of the [go to x: ( ) y: ( ) z: ( )] block and added GravAcc_Z to the z slot. Figure 1 shows that the 2-D case generalizes to 3-D. The parametric equations for a curtate cycloid in 2-D include terms for cyclic displacement and for translation. To create a more compact representation, I did not include the translation.

The red loop back represents the portion of the run after dismounting and turning 180 degrees to push the unicycle back up the slope. The radial trace at the end of the record was not part of the run but was added to move the beetle out of the traces.

While Figure 1 resembles a figure in the step "A beetle rides a unicycle" ( http://buildinprogress.media.mit.edu/projects/3335/steps?step=16774 ), the similarity is deceiving. In "A beetle rides," I used angles derived from measurements of angular velocity as an input to the parametric equations for a circle. I used the third dimension as a visualization tool to separate observations that would otherwise have been difficult to see because of overlap. The only explicit use of the observed values of angular velocity in Figure 1 of this step is to change color of the trace to indicate the spatial relationships of velocity changes. 

March 23, 2016 at 12:29 PM
Comments (0)

The success of my first steps from 2-D to 3-D ( http://buildinprogress.media.mit.edu/projects/3335/steps?step=18875 ) led me to learn more about measurements of Gravitational Acceleration (GravAcc_X, GravAcc_Y, and GravAcc_Z) and the connections with the Rotation Matrix measurements that Sensor Data produces. Wavefront Labs describes the outputs in their documentation ( http://wavefrontlabs.com/Wavefront_Labs/Sensor_Data.html ):

  • Attitude rotation matrix – The attitude rotation matrix is a 3x3 matrix ((m11, m12, m13), (m21, m22, m23), (m31, m32, m33)).
  • Gravitational Acceleration – Returns the gravity acceleration vector (x, y, z) expressed in the device’s reference frame – units of g.

For someone who is familiar with this field and conversant with its conventions, that might be enough to get started. However, I am still developing my understanding, so it wasn't enough for me. I needed to explore.

The values of the gravitational accelerations are periodic functions that range from +1 to -1 for X and Y, with a smaller range for Z. The X and Y accelerations oscillate 90 degrees out of phase. The shapes of the oscillations depend on whether I ride the unicycle or push it ahead of me: pushing produces waves that resemble sinusoidal oscillations, but riding generates a sawtooth pattern for GravAcc_Y and broad-shouldered maxima for GravAcc_X. Meanwhile, the beetle traces in the 2D to 3D steps form a cylinder around the Z-axis. All of these observations provided a fertile field for insights.

My earlier work using the parametric equations of rotation angles led me to think that I needed to determine the angles from the GravAcc_s. But the 2D to 3D step helped me see that I could use the GravAcc_s directly. Then it hit me: I needed to think of the GravAcc_s not as individual values but as a vector. The GravAcc_s have a geometric interpretation: the vector GravAcc represents the scaled (unit vector) location of the phone's sensor in Cartesian space. The individual components of GravAcc (e.g., GravAcc_X) represent the projections of the vector onto the external frame's coordinate axes.

If that intuition were correct, then I expected that the measurements of the GravAcc_s would be connected with the elements of the Rotation Matrix, too. To test that intuition, I collected a new dataset that included:

  • Rotation Matrix (RMij)
  • Gravitational Accelerations (GravAcc_s)
  • and Angular velocities (RotRate_s)

Since the addition of the Rotation Matrix more than doubled the quantity of data that I had been collecting, I chose to decrease the sampling rate to 1 Hz (once per second) and to test several rotations: pitch, roll, and a yaw variant. This produced a data list of lists (Beetleblocks has not yet merged the new table representations that were recently released for Snap) of 16×22.

I used Snap to plot the values of the GravAcc_s against entries in the Rotation Matrix and found that they matched (other than an inverted sign) the values of one column of the Rotation Matrix; a quick numerical check is sketched after the color key. Figure 1 shows the effects of the low sampling rate as blocky (rather than smooth circular) traces. I adjusted the color of the traces to correspond to the revolutions:

  • Pitch: yellows
  • Roll: greens
  • and Yaw*: blues  
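
The match is easy to check numerically. Here is a minimal sketch using the first sample row from the data posted in the comment below; reading the match as the negated third column (GravAcc_i = -RM_i3) is my interpretation of those sample values:

// First sample row (t = 1.001029) from the comment below.
var RM13 = -0.003615, RM23 = 0.209721, RM33 = 0.977755;
var gX = 0.003615, gY = -0.209721, gZ = -0.977755;

// Each sum should be ~0 if GravAcc equals the negated third column.
console.log(gX + RM13, gY + RM23, gZ + RM33); // 0 0 0

// The gravity vector should also have unit length (1 g).
console.log(Math.sqrt(gX * gX + gY * gY + gZ * gZ)); // ~1.000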

Reflection: Wavefront says that Gravitational Acceleration returns a vector, but my understanding at the time did not support translating that terse description into the meaning it now has for me as a somewhat more knowledgeable user of the technology. I have searched widely for other explanations that might have helped me at that stage of learning and so far cannot find any.

*Since I wanted to compare GravAcc_Z, I chose to roll up to a vertical orientation, rotate the smartphone around the Z-axis, and then roll back down to a horizontal orientation. Note that Figure 1 represents that vertical Z-axis rotation as a horizontal Z-axis rotation. This will require further work to resolve.

March 25, 2016 at 2:26 PM
Comments (1)
Timestamp,RM11,RM12,RM13,RM21,RM22,RM23,RM31,RM32,RM33,GravAcc_X,GravAcc_Y,GravAcc_Z,RotRate_X,RotRate_Y,RotRate_Z
1.001029,0.999993,-0.000761,-0.003615,0.001503,0.977760,0.209721,0.003375,-0.209725,0.977755,0.003615,-0.209721,-0.977755,-0.615010,-1.007187,-0.132773
2.000946,0.994689,0.070720,-0.074779,-0.102910,0.694710,-0.711890,0.001605,0.715805,0.698299,0.074779,0.711890,-0.698299,-1.018981,0.026645,0.726317
...
19.001187,0.934981,0.055610,-0.350312,-0.008877,0.990992,0.133624,0.354588,-0.121827,0.927052,0.350312,-0.133624,-0.927052,0.108894,-0.987486,0.459038
20.000424,0.996296,-0.065212,0.056060,0.059360,0.993178,0.100372,-0.062223,-0.096672,0.993369,-0.056060,-0.100372,-0.993369,0.060959,0.023667,-0.114315
about 1 year ago

I like these representations of the data because they connect my physical sensations (kinesthesia) of riding with the abstract representation of motion (kinematics). This run comes from a short ride (~7 wheel rotations) down our inclined driveway. Just before reaching the end of the run, I applied force to the rising back pedal to prepare to stop. I can see evidence of that braking action in the 3-D trace of the GravAcc vector. I described it in the project notes of the Beetleblocks project:

  • 2016-3-26 10:25
    Replotted the track of the beetle using a Z-axis increment to stretch out the overlap. Then oriented the 3D plot to make the kink more prominent and describable. The beetle points toward the Z-axis label and the kink lies half-way between the beetle's nose and the Z label. 

While the beetle traces its path in this representation, you can see it slow down during that kink. I tried to capture a screencast of that change in angular velocity to include it as part of the description of this step. But Quicktime seemed to bog down and be unable to capture the screen at the same time that the Chrome Browser was busy producing the 3-D rendering of the track. 

Meta: Build In Progress uses hints to encourage reflective practice. While I wrote this part of the step, BiP offered, "Did you face any interesting challenges at this step?" I 'solved' the challenge of making a screen recording of what I described above by using a cell phone to capture the screencast. The quality of the image is not nearly as good as a Quicktime screen capture, but the phone captured the process when Quicktime could not.

After WOW practice this afternoon, I walked out through Family Focus. A. said that she had brought her laptop today. So, it seemed like a great opportunity to load the MakerBot software to test the transfer of STL files (e.g., the STL corresponding to Figure 3). After she got the application running on her laptop, we tried to get a sample STL file. When we used the link in the comment for this step, it wanted her to create a TinkerCAD account. I told her that we could use an alternative transfer method. I am adding a design file to this step to explore this alternative. She used the delay to explore other sources of design files and tutorials for use of the application.

Reflection: The congruency between physical sensations and abstract representations helps me to see that I'm on the right track and need to push through the challenges of understanding the abstractions.


March 26, 2016 at 11:02 AM
Comments (1)
I wanted to explore the creation of manipulatives to provide yet another medium for thinking with one's hands. I created a TinkerCAD account to see whether I could export some of these 3-D models. Here is a link to one:
https://tinkercad.com/things/9VjKDnoGUyg
about 1 year ago

With the transition to performance preparation, Eric urges WOW members to develop new tricks and routines to share with others. At yesterday's intermediate practice, he set up the video projection system and showed kids videos of tricks they could develop, e. g. the kick-up mount: https://www.youtube.com/watch?v=RG5CLZL57_w 

A group of about half a dozen kids worked on learning or relearning the kick-up mount for a substantial fraction of the open-practice time. Curtis had learned how to do this mount during one of last summer's WOW Camp sessions. It took him several tries to refresh the skill and a while longer to get it consistently. After he got it, he coached others in how to improve their technique. In this case, he offered some counter-intuitive advice: when you are trying to get the second foot on the pedal, "don't use your foot to find the pedal but let the pedal find your foot."

Shortly after he gave that recommendation to kids who had never gotten it before, some started to get the trick. By the end of the practice time all had improved and some had started to get it more consistently. 

As I watched this work, I saw in it a case for extending my analysis of rotation matrices. Up to this point, I had focused on motion within the plane of rotation. But in this case, the motion included a substantial and essential out-of-plane rotation, too. So, here I simulate the motion (because I can't yet do the kick-up myself).


March 29, 2016 at 4:20 PM
Comments (0)

TERC and the Institute for Learning Innovation invited interested parties to participate in an online forum, March 30 through April 8, to discuss the integration of mathematics with making and tinkering experiences in informal learning environments. http://www.instituteforlearninginnovation.org/institute-news/march-18th-2016
Since that topic describes an important facet of our work in Woodside Unicycle Mechanics, I read the white papers produced for the project and joined the forum to learn more about the project. After introductions, Scott Pattison, the moderator, posed the prompt, "Do you believe mathematics is integral to making and tinkering experiences? If so, give examples. If not, why not?" http://www.informalscience.org/prompt-2-there-math-making

While my development of the kinematic sculpture unambiguously uses math in the making, was it also an important part of the student's role in the collaboration? That question prompted me to capture this idea from a recent collaboration with an eighth-grade student to develop our capacity to print kinematic sculptures. Beetleblocks supports the creation of 3-D structures that can be exported in file formats suitable for printing. The student installed a copy of the MakerBot printer application and imported the .stl file. When the program initially displayed the object, it had an orientation and size that required substantial transformation. The student used the application to rotate, translate, and scale the object to suit our intention to print a small test piece.

Figure 1 shows one of the ways in which the interface design of a 3-D design application helps guide the design process and make the math more explicit. During rotation of an object around one of its Cartesian axes, the application displays a protractor that highlights the change in orientation that the designer is making. This image comes from TinkerCAD (a browser-based application), but the MakerBot desktop application that the student used includes a similar tool. The MakerBot application also includes other interface elements that help make the connection to math (e.g., the scrollable preview that simulates the printing process).

Figure 2 illustrates the iterations required to improve this explanation. Initially, I used a convenient source for the image but was not happy with the quality when I saw the post displayed on BiP. So, I returned to the source and recaptured a more satisfactory image and labeled it Figure 1.

April 3, 2016 at 3:14 PM
Comments (0)

I went to WOW practice a few minutes early to set up for the session. On my way in, I saw A. waiting for announcements so that she could go upstairs to help set up for Family Focus ( http://www.familyfocusme.org/services/school-aged-program/ ). She said that she had something to show me. She dug into the zippered pouch in her backpack and pulled out a small grey object. She had worked with her Technology Education teacher at the Middle School ( http://mam.link75.org/?sessionid=66a82a227857010a4fda3c3aef2b3673&t ) to rescale the object further to reduce the size and print time. Then they printed it in half an hour.

Since FF staff needed her help to prepare for the afternoon, we didn't have enough time to talk through the process. So, I quickly snapped a photo and left the sculpture with her to use for development of a BiP step. Figure 1 shows the printed object. It is about 1 cm in diameter and 1 cm high. The photo does not show enough detail for thorough inspection. They chose to print without rafts or supports. The first few turns of the spiral show some distortions that need further inspection.

See more details for this project at the TinkerCAD site:
https://tinkercad.com/things/9VjKDnoGUyg

April 8, 2016 at 9:29 AM
Comments (0)

For the last several days, I have taken a slight diversion in my project work to investigate something that seems unrelated but that, I am discovering, is connected at a deeper level. My mother is writing memoirs. She invites others to read and comment. Several months ago she wrote about Wabbaquasset Woods, a retreat in northeastern Connecticut that our parents bought with two other families in the late 1950s. Her piece focused on two important features of that idyllic place: Walter Bennett and water.

As I had with another piece she had written, I added a picture to help readers see the piece in another way. I chose a satellite image of the location taken from the satellite view in Google Maps. She also shared it with another of the co-owners of the property, who responded with her perspective on some of the writing. The email also contained Mom's opinion that the image I had supplied showed the vicinity of our property but not the property itself. She expected to see certain features in the photo, and when she did not see them, she concluded that it was another location.

Since I had used other criteria, I was confident that I had the correct location. So, I created a map using MyMaps.google.com and added features that would help her to see the figure in the ground. Since it is relatively easy to add features, my simple goal of demonstrating the ground truth for this image turned into a larger project.

When I added the long-unused stone bridge abutments across Bungee Brook, I thought back to things that Walter Bennett had told us about the origins of this abandoned roadway. Mom had written in her memoir that Walter told us that this former road had been used by Thomas Hooker in his relocation from Boston to establish Hartford and, subsequently, the Connecticut Colony. Initially, I wrote conservatively and skeptically, saying that 'local lore' places the Hooker party at this location.

So, I searched the Internet for information about what historians believe is the real path for Hooker's party. Quickly, I discovered Jason Newton's work on The Old Connecticut Path. Apparently, local lore is consistent with a larger body of evidence:
https://sites.google.com/a/oldconnecticutpath.com/oldconnecticutpath/move-1

Developed a technique to compare historical maps with current maps and imagery. ...

...

Added .stl and .obj files for an Earth Science test print that we can explore with middle school students.

April 12, 2016 at 10:17 AM
Comments (1)

Ms. Dow asked me to help mentor programming projects because kids chose projects outside of her comfort zone. When I arrived, I learned that several kids had chosen programming projects. One wanted to create an app for a smartphone. We had worked together on the science and math of unicycling during WOW summer camp, and she wanted to extend that work. So, I expanded my mentoring responsibility to her project, too.

I had used AppInventor with Mt. Ararat Middle School students when it was still a Google project. We used it to control NXT robots. But I hadn't used it much recently, so I had to brush up on my understanding of the current state of the art. I introduced her to MIT's AppInventor site and helped her find some of the tutorials. When she looked at the existing tutorials, she said that they didn't help her reach her goal of connecting to unicycling.

She quickly discovered challenges in understanding AppInventor that exceeded my ability to recall how to program the interactions. When I couldn't do it during the one-hour-a-week session, I went home and fired up AppInventor to refresh my memory. Unfortunately, I discovered that AppInventor no longer worked on my machine. I could open programs and inspect them, but I could not connect my phone to test the programs (sensor-based programming requires a phone and does not work under emulation). So, she and I have been a little disappointed that system challenges prevented me from helping more.

But Google recently told me that it would no longer support upgrades to my Chrome browser because Apple no longer supports my operating system. I need Chrome to support my work on Beetleblocks, so I bit the bullet and upgraded to El Capitan. The impact of the upgrade has been mixed. I lost the ability to serve webpages; Apple took away the GUI for sharing and now requires terminal-based configuration of servers. Yes, the capability is still there, but that is not how I want to spend my time.

Woodside's upcoming Robotics workshop gave me the incentive to try AppInventor again. Apparently, the upgrade to El Capitan restored some settings that AppInventor needed in order for me to make the connection to my Android phone. Now, I hope to be able to support our app programmer, and perhaps she can help me teach adult helpers in parallel with Maine Robotics.

May 19, 2016 at 11:32 AM
Comments (1)
Glad to hear you were mostly able to get your AppInventor apps to work again. I have a lot of sympathy for developers after experiencing problems with upgrades myself – with the Build in Progress mobile apps and other projects.
about 1 year ago

The USGS has developed a platform to engage citizens. The National Map Corps ( http://nationalmap.gov/TheNationalMapCorps/ ) solicits contributions from citizens to help improve the national and local information infrastructure. They build on concepts from Open Street Maps and other open source projects. I have been learning about their work as a result of following leads from the NSF Video Showcase ( e. g. http://stemforall2016.videohall.com/presentations/664 ).

This morning, I stumbled across a tool that addresses one of the goals of this thread of my project: How can I represent the hill challenge to fifth-graders? This mapping interface ( http://viewer.nationalmap.gov/example/maps/pqs_profile.html ) allows me to draw a transect on a USGS Topo layer and produce a chart of elevation change along the transect. The interface also produces a table of values that includes the lat/lon/elev values at each of the points on the transect.

The table of values appeals to me because I want people to see data they have 'collected' and wonder, "What other representations can I create?" So, I tried to copy the data table from the page and discovered that the interface renders it in an inaccessible form. I tried several ways to capture the data: with selection, page source, and the browser console. So far, none of these approaches has worked. I wonder whether this is intentional because of licensing agreements with other parties. See the attribution annotation at the top of the screenshot of the data table.

Meta update: Persistence pays dividends!
I opened the Firebug Inspector and found that the data table is an html div and that each of the cells within the div contains the numeric values of lat/lon/elev. So, I could conceivably capture the data (see the console sketch below), but there must be a better way!
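
For the record, the console capture amounts to something like this sketch; the selector '.profile-table div' is a placeholder for whatever class names the Inspector actually reveals:

// Run in the browser console on the elevation-profile page.
// '.profile-table div' is a hypothetical selector; substitute the
// actual class names found with the Firebug Inspector.
var cells = document.querySelectorAll('.profile-table div');
var values = Array.prototype.map.call(cells, function (cell) {
    return cell.textContent.trim();
});
console.log(values.join(',')); // paste into a spreadsheet or a Snap list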


May 22, 2016 at 9:55 AM
Comments (0)

In the step 'Everyday work of ...' ( http://buildinprogress.media.mit.edu/projects/3335/steps?step=19463 ), I included a screen shot of an historical map superimposed on a Snapi map ( https://github.com/bromagosa/Snapi ). That simple example convinced me that I needed to enhance my capacity for georeferencing to analyze an historical map challenge. So, I examined various tools, including some browser-based tools (e.g., http://mapwarper.net/ ).

During my explorations, I kept seeing references to QGIS as an open source tool for mapping ( http://qgis.org/en/site/ ). The NSF Video Showcase included an entry ( http://stemforall2016.videohall.com/presentations/664 ) that showed upper elementary kids engaged in problem solving using QGIS. This project has developed curriculum resources that intersect with some of our work. So I decided to install the desktop version and see if it would support my georeferencing challenge. The Mapbox educational resources gave these tutorial instructions ( https://www.mapbox.com/help/georeferencing-imagery/ ) to facilitate use of their resources.

The image for this step shows an old map of Woodstock, Connecticut superimposed on Open Street Map imagery ( https://www.openstreetmap.org/#map=12/41.9631/-72.0624 ). While I have more to learn about how to use these tools, this represents a big step forward.

May 30, 2016 at 2:52 PM
Comments (0)

Today is a professional development day for MSAD 75 teachers. So, I am engaging in personalized professional development to support the projects on which I am working with Woodside staff.

In this step I learn more about using QGIS to support the georeferencing challenge. I reloaded yesterday's file for superimposing the Woodstock 1883 map over the OpenStreetMap base map. While I could see and manipulate the old map, I could not get the OSM base to render in the window. So, I clearly have more to learn about layers and web tiles.

But the best way I know to learn more is to try variants of the process to get more experience. So, I decided to reference two old maps to one another. I loaded the Thayer lithograph of Woodward and Saffery's 1646 map and georeferenced the Woodstock 1883 map to it. Now, as I type this, that seems as if it might not have been the best choice because it may overwrite the referencing files that positioned the Woodstock map on the OSM base. But I used only a small number of points, and it should be simple to recreate.

The image for this step shows the approximate referencing of Woodstock 1883 to the Thayer version of Woodward and Saffery. I can't position precisely at this scale because there are no authoritative references on the Thayer map other than the 41 degree 55 minute line.

The second image for this step shows the level of detail in the georeferenced maps that I am trying to achieve for this challenge. This works because I used large files for both images and don't have to contend with transport for display on the web.

The third image in this step shows one of the challenges I encountered using BuildInProgress to document this work. Tiffany asked if I would mind documenting an error-inducing sequence that disrupts my work in BiP. Here are the steps that led to the error:

  1. Click on a parent post and then add a new post.
  2. Add the title.
  3. Write the body of the step.
  4. Add the first image from a thumb-drive transporting it from the laptop running QGIS.
  5. Save the step.
  6. Get this response (screen shot of BiP error message) to the save request.
  7. Copy out the body of the text so that if something goes wrong, I don't have to recreate the text from scratch.
  8. Take the screenshot to document the error.
  9. Press 'leave the page' button.
  10. BiP saves the work as a new step.
  11. Open the step and continue to add text and images to the step.

May 31, 2016 at 10:36 AM
Comments (5)
Tiffany:
I encountered the problem again this morning in a session in which I attended to the sequence of events that led to the error message. I documented it as part of this step.
about 1 year ago
Thanks for the info! Do you mind sharing what browser you're using and on what operating system (Windows, Mac, Linux, etc.)?
about 1 year ago
Macbook OSX 10.11.4
Firefox 46.0.1
But I have also experienced it on Chrome but not documented it as carefully.
about 1 year ago
I was able to replicate the issue in Firefox and it should be fixed now! If you run into any other pesky pop-ups, let me know.
about 1 year ago
Thank you. Always happy to help improve the tools.
about 1 year ago

Rob Horne, Woodside Teacher, responded to a Woodside parent's request to expand robotic opportunities by enlisting the expertise of the Maine Robotics program ( http://www.mainerobotics.org/workshops--open-houses.html ).

Since I was already familiar with most of the ideas and skills that Maine Robotics typically covers in this introductory workshop, I spent my time observing kids, parents, teachers, afterschool staff, and the Maine Robotics staff. But I also used the time as an opportunity to engage in parallel personalized professional development and prototyping. I extended the work that I described in the previous step, 'Mentoring by example.'

I remixed my pitch-roll orientation-sensing app into a roll vs. time charting app. After I finished, I looked around the room to see whether any of the robotics teams had created a bot suited for testing with my new app. One of my WOW, Genius Hour, and Family Focus collaborators had created a mechanism that caused the bot to displace itself out of the plane parallel to the floor. So we put my phone on his bot and ran the test. Figure 3 shows the slight change in roll induced by his mechanism. But it also shows clearly that I need to add a sensitivity control in order to properly display the connection between the physical motion and the abstract representation.

I shared my work with Rob and he invited me to work with some other kids. I did, with another WOW member, but the design did not accentuate the motion required to demonstrate the connections. So, today during the second half of my MSAD 75 PD, I found a way to design a robot that I hope will accentuate the motion necessary to make those connections clearer. The fourth figure in this step shows the eccentric element added to a Lego wheel so that it will induce roll of the robot. I had originally tried a more conventional Lego construction, but the Lego plastic slid across the surface rather than lifting the axle over the extension. The rubber bands holding the part in place should help the bot power over the lifter.

-----------------------------
Meta: testing the order of events in creating a BiP step. Here, I test adding the title and text and waiting to add an image during subsequent edits. This text-only submission did not produce an error but did require a browser refresh in order to get the step to appear in the step-tree. I added images via the BiP Android app, but it quit when I tried to add several images from the camera store at one time. I backed off, and it took three images without a problem.

Meta 2: I notice, during the writing and documenting of these PD steps, that it is much easier to produce posts documenting my own work than it is during activities with kids or adults. While I was able to concentrate enough to make serious improvements in my AppInventor code for the app, I did not document my process (other than to take pics and video of activities). In retrospect, I should have taken Rob up on his offer to present so that others could have a better understanding of my parallel work.

May 31, 2016 at 12:10 PM
Comments (0)

STEAM morning. P.L. trimmed the bristles fore and aft and produced an interesting pattern of motion. It rocks forward and then leaps ahead.

Some kids attended to organizing the race sign-up and others to making the race-track more colorful.

"Comon, let's help to clean up. I'm going to make some more when I get home!"

June 1, 2016 at 9:12 AM
Comments (0)

Kids built Artbots with minimal guidance and maximum emphasis on figuring out what you want to do.

I observed that they generate interesting patterns in the traces but that it is challenging to get kids to see the patterns, dissect them, and reflect on what they represent. So, they definitely are engaged and having fun, but how does it connect to other kinds of learning?

June 1, 2016 at 10:32 AM
Comments (0)

This is the most direct student-generated connection to the standard math curriculum. He tried to find a way to fix the end of one leg. At first, he used his fingers to stabilize the pencil. Then he punched a small hole in the paper and set the pencil in it. The pencil constrained the motion of the Artbot to circles, as in a drawing compass. The distance from the pencil to the marker tips determined the radius of the circles drawn.

June 1, 2016 at 10:57 AM
Comments (0)

I'm trying to learn how to capture data from an image layer in a web map. At first, I tried using ArcGIS Online but discovered that while I could draw line segments, I could not export them to use in another context.

So, now, I am trying to use OpenLayers tools (e.g., http://openlayers.org/en/latest/examples/draw-features.html ) to see whether I can capture points and then transfer them to another context, e.g., http://geojson.io/ The OpenLayers example above provides some of the functionality I'd like to have. And an OpenLayers workshop resource ( http://openlayers.org/workshop/controls/draw.html ) provided me with a snippet of code that I can insert into the running page by adding it to the Firebug web console:

draw.on('drawend', function(evt) {
    var feature = evt.feature;
    var p = feature.getGeometry();
    console.log(p.getCoordinates());
});
//Object { lg=false,  target=kv,  type="drawend",  more...}
//[-8061966.247294113, 4641501.194832124]

The last two lines are output from the drawend event. I can also capture the coordinates of a lineString. And I can transfer them to my GeoJSON map using the GeoJSON editor functions. But the coordinates are currently in the units of the map's projection rather than the longitude and latitude that I want. Still, this is progress. I recall seeing an example that output the latitude and longitude of a mouse click to a popup window. So, I'm off to find that example to see how they did that.

Update: I found the mouse position example here: http://openlayers.org/en/latest/examples/mouse-position.html And I show how I used it in Figure 3 of this step. But it requires an understanding of projections that I have not yet mastered. Guess I have a new sub-goal.

Update 2: After reading about map projections ( https://en.wikipedia.org/wiki/Map_projection ) I understand that they matter most in cases where the curvature of the earth is a critical element of the representation. Since I am primarily interested in a relatively short transect, I can probably get by with a simple transformation as a first approximation (sketched below) rather than an elaborate (though technically more accurate) transformation. A search on one of the projections, "EPSG:4326," led me to http://epsg.io/about that may help me to explore this topic.
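
Assuming the drawend coordinates above are in EPSG:3857 Web Mercator (the default view projection for OpenLayers maps), the simple spherical inverse looks like this minimal sketch; OpenLayers also offers ol.proj.toLonLat for the same conversion:

// Convert EPSG:3857 (Web Mercator meters) to EPSG:4326 (lon/lat degrees).
function webMercatorToLonLat(x, y) {
    var R = 6378137; // sphere radius used by Web Mercator, in meters
    var toDeg = 180 / Math.PI;
    var lon = (x / R) * toDeg;
    var lat = (2 * Math.atan(Math.exp(y / R)) - Math.PI / 2) * toDeg;
    return [lon, lat];
}
// webMercatorToLonLat(-8061966.247294113, 4641501.194832124)
// returns roughly [-72.42, 38.44]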

Meta: These open source tools and resources are surprisingly useful. It takes some work and persistence to get the results I want, but they illustrate how software engineers are making more powerful tools more widely accessible.


June 2, 2016 at 12:41 PM
Comments (1)
This resource at Mapbox indicates some of the ways in which their tools can be used. It helps me to gain some perspective on our work in this thread:
https://www.mapbox.com/industries/naturalresources/#drones
about 1 year ago

The New York Public Library has developed a new service to increase access to its valuable historic map collections: http://maps.nypl.org/warper/  They have digitized a growing number of maps and have created a browser-based map warping tool that allows users to geo-reference the historic maps to current digital maps.

Figure 1 shows the map and a warped version displayed in the MapWarper interface. This example makes it abundantly clear that maps can warp in mysterious ways. I just used the existing 6-point geo-reference for this map. I have not yet found a way to examine the choices of the mapper who did this geo-referencing. I wonder whether it is possible to have multiple sets of geo-referencing data so that researchers who focus on particular locations can optimize the map for the context of interest.

Figure 2 shows that the Digital Collections project encourages remixing of their maps by providing a variety of ways to access the data, including KML, WMS, and OSM Tile Layer services. Figure 3 shows that the service can be used with the http://geojson.io mapping tool. This map shows some of my GeoJSON points-of-interest superimposed on the NYPL historic map. It also reveals a small bug in the MapWarper interface. The layer selection tool in the lower right corner of the map frame shows two layers. The first time I tried to add the layer to this map, I used the pull-down menu shown in Figure 2 as a yellow-bounded box. The url supplied by that part of the interface did not work; I found a url that did work in the 'Export' tab.

The Digital Collections staff provide a contact email address and support comments on specific items for registered users. But I haven't found a process for reporting bugs.

Update (a few minutes later): A search for nypl "map warper" took me straight to a GitHub repository for this project: https://github.com/nypl-spacetime/nypl-warper  Now, I've got a best-practices approach to reporting the bug described above. 

Figure 4 shows that the iD editor for http://openstreetmap.org also supports superimposing NYPL map layers. Since OpenStreetMap is oriented to current use, I'm not yet sure of a context in which this capacity would help, but I will keep it in mind as I continue this work.

Update (a few hours later): I reported the bug in the Tile Layers service at the repository on GitHub and received a thanks-and-closed response that arrived while I was coaching at Woodside: https://github.com/nypl-spacetime/nypl-warper/issues/64  Apparently the comment-field bug will take some more time.

June 6, 2016 at 9:31 AM
Comments (0)

After making art bots, kids made art, math, and science together.

During last week's STEAM activity, I tried to engage the fifth-graders in discussion of some of the patterns that I could see in their traces. For the most part, I didn't succeed in prompting discussion or analysis of patterns. So, this week, I decided to try an intervention. I asked Linda Koch for permission to add a brief setup for the Art Bot segment of the morning.

I reintroduced myself to the group with details that Kim Emerson had not included. I said that I am a scientist and, like many scientists, I am interested in patterns. I told them that I saw patterns in kids' Art Bot drawings last week and that I'd like them to be on the lookout for interesting patterns today, too. I explained that we might find patterns at different scales: big circles and tiny elements of the traces. I also referred to a comment from one of the kids, who had earlier said it would be neat to put a sensor on her brush bot. I said that they could think of the pens we use in the art bots as a kind of sensor that shows where the art bot had traveled.

Kate Greeley came to Woodside for a meeting and stopped in the STEAM activity to see what we were doing. She helped me listen to the conversations that kids were having with each other. We both informally observed that some kids were describing patterns. I think we are making progress.

One striking contrast led to considerable analysis and reflection. Most kids first tested their art bots on the tables where they had constructed them. But one boy soon moved a paper to the carpeted floor. He and others noticed a marked difference in the small-scale patterns between the two conditions. The traces from the bot on the floor were much smoother than the traces on the table. Linda Koch encouraged the kids to try different papers to see whether that affected the results.

In a side conversation with me, Linda expressed curiosity about the cause of the difference between the two sites. I explained that we could probably turn that question into an extended research project. I told her that my hypothesis is that the hard table surface and the soft floor carpet influence the motion of the marker tips.

June 8, 2016 at 10:49 AM
Comments (0)

That was so cool!

June 8, 2016 at 12:51 PM
Comments (0)

I am working toward being able to map features of interest relative to historic maps and artifacts. I have been able to use tutorials, APIs and reference materials from various sources to get closer to my goal. In this post I show further progress.

GeoJSON.io is a simple open-source mapping tool that I use for map-based historical analysis. GeoJSON.io provides a tool to add tile layers to a map. The example they provide is a watercolor layer from Stamen. I used the tool to show that I was able to reproduce the documented response. GeoJSON.io has also begun to expose parts of its services through an API that they describe here: https://github.com/mapbox/geojson.io/blob/6e71c1eb9117f1902003b1eb1be9c63226e5c077/API.md

That description of the GeoJSON.io API includes a Console API that supports exploration of their services. So, I followed the instructions and was easily able to produce a similar result by a slightly different path.

window.api.map.addLayer(L.tileLayer('http://tile.stamen.com/watercolor/{z}/{x}/{y}.jpg'))

I don't quite see the point of adding a watercolor layer to a map, but the code snippet allows me to test the approach with the Map Warper service provided by the NYPL.

window.api.map.addLayer(L.tileLayer('http://maps.nypl.org/warper/maps/tile/12966/{z}/{x}/{y}.png'))

Figure 1 shows progress toward my goal.

Meta: I looked back at NYPL Map Tile Layers, the previous step, and noticed the similarity of one of its figures to Figure 1 in this step. The change may seem slight, but it is important to me. The GeoJSON.io mapping tool supports my sandbox analysis of mapping features that OpenStreetMap.org discourages for production maps. http://wiki.openstreetmap.org/wiki/Sandbox_for_editing

June 9, 2016 at 10:49 AM
Comments (0)

I'm still on my quest to add CTEco imagery to a GeoJSON.io map in a way that does not require meticulous selection of the source image, tedious georeferencing, and arcane identification of lat/lon extents (the bbox required by L.imageOverlay). I read several tutorial sources about spatial reference systems, including pages from Esri's JavaScript API. None of them told me explicitly how to set up the query that would give me the combination of resources that I need.

Finally, after much hair pulling, I tried entering EPSG:4326 WGS 84 ( http://spatialreference.org/ref/epsg/4326/ ) in the 'Image Spatial Reference:' field. That did the trick:

{
 "href": "http://ctecoapp4.uconn.edu/arcgis/rest/directories/arcgisoutput/Hillshade_ImageServer/_ags_757147.png",
 "width": 400,
 "height": 400,
 "extent": {
  "xmin": -73.744604454980163,
  "ymin": 40.522847588282886,
  "xmax": -71.758680772190189,
  "ymax": 42.50877127107286,
  "spatialReference": {
   "wkid": 4326,
   "latestWkid": 4326
  }
 },
 "scale": 0
}

Figure 1 shows the Snapi project where I am developing this work. Figure 2 shows that I can generate the JSON that describes the image I request so that I am able to pull out the desired latitudes and longitudes to specify the bounding box in a coordinate system that GeoJSON.io requires.

I had extra incentive to persist in the face of these challenges because of a recent presentation of the Topsham Historical Society at the Topsham Public Library. Bruce Bourque, Maine State Archaeologist, talked about the potential for using LiDAR to augment the resources that he has found, which may help to locate sites of early settlement of the area. https://www.midcoastmaine.com/calendar/event/8433

  • Topsham Historical Society 6:30 PM Topsham Public Library , Topsham
  • We welcome Dr. Bruce J. Bourque, Chief Archaeologist & Curator of Ethnography at the Maine State Museum. Using penetrating radar, Dr. Bourque has located evidence of early European settlements around Merrymeeting Bay (late 17th century and early 18th century), and his talk will focus on these new discoveries. ... Please join us for this interesting program on the area's earliest residents.

Now, I got to see how the parts fit into a system.

June 16, 2016 at 2:54 PM
Comments (0)

I want to include transparency control for my imageOverlays so that you can see the current ground truth. I've tried on several occasions to incorporate opacity controls and attribution into my overlay, but the JavaScript syntax (where to include commas and curly brackets '{...}') has eluded me. Today, I found a worked example of attribution at the Leafletjs site: http://leafletjs.com/index.html

var map = L.map('map').setView([51.505, -0.09], 13);

L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png', {
    attribution: '&copy; <a href="http://osm.org/copyright">OpenStreetMap</a> contributors'
}).addTo(map);

So, I remixed their sample code with the working example of my image overlay and successfully added both attribution and opacity control.

//This code snippet layers an image stored at BiP onto a region of interest with opacity and attribution control.
window.api.map.addLayer(L.imageOverlay(
    'https://buildinprogress.s3.amazonaws.com/image/image_path/27450/uploads_2F0l8yk2trn474m9xg-6e46db607ef8f826cdad800e182dd6f4_2FScreen%2BShot%2B2016-03-26%2Bat%2B10.24.34%2BAM.png?v=1466161366817',
    [[41.913128157499685, -72.09933131933212], [41.91568895957104, -72.08980411291122]],
    {attribution: '&copy; <a href="http://ctecoapp4.uconn.edu/arcgis/rest/services/Hillshade/ImageServer">CTEcoHillshade</a> contributors', opacity: 0.6}
));

Now, I need to learn to parse Esri JSON to capture and transform the image url and extent bounds for hillshade images.
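
A first sketch of that parsing step, assuming the Esri response shown above has already been parsed into a JavaScript object named esri (the function name is mine):

// Build a Leaflet imageOverlay from an Esri image-service response
// like the JSON shown above. Leaflet bounds are [[south, west],
// [north, east]], i.e., [[ymin, xmin], [ymax, xmax]].
function esriImageToOverlay(esri) {
    var e = esri.extent;
    var bounds = [[e.ymin, e.xmin], [e.ymax, e.xmax]];
    return L.imageOverlay(esri.href, bounds, { opacity: 0.6 });
}
// window.api.map.addLayer(esriImageToOverlay(esri));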

Meta: My JavaScript code is not yet well formatted because I am still learning how to use Firebug more fluently. As I use Firebug, I can see how useful it is for understanding the big data behind digital mapping. Figure 2 captures a peek into the labyrinth. Digital mapping focused on converting big data into user-friendly visualizations long before anyone called it 'big data.'

Here, I follow up on an idea from earlier in the day. Can we also use the design files as a repository? Will it display the image or only support download? It does not force a download, since the browser recognizes and can handle the file type. Contrast these two urls:

The Design File link is more user-friendly. I wonder whether there are other issues to consider. Update: The best way to find out is to try it. Here, I add the Thayer map as a test (low res, to see whether the approach works).

June 17, 2016 at 10:31 AM
Comments (1)
For readers who are curious about Firebug: https://en.wikipedia.org/wiki/Firebug_%28software%29
about 1 year ago

In a post about WOW Winter Camp,
http://buildinprogress.media.mit.edu/projects/3335/steps?step=17301
I described the spread of Genius Hour learning approaches to an extended learning opportunity. Here, I give an update after the first day of WOW Summer Camp.

After we got all six new riders up and going, one of the girls who had participated in Ms. Dow's Genius Hour experience came to me and asked if I were busy. She said that they wanted to go trail riding. I agreed, and am glad I did, because I hadn't appreciated the full meaning behind her request. As we assembled the crew, I noticed first that it was exactly the intersection between Genius Hour participants and WOW Summer Camp participants. Five of them rode unicycles, but one boy left his unicycle in the gym. As we walked/rode out to the trail, it gradually became clearer that they had planned to extend one of the video interests that had developed during Genius Hour.

The crew took the video clips shown in Figure 1. Then, when we finished the trail/video session, they asked if they could use my computer to view their first session. That effort to transfer and display their first takes generated interest among a number of campers who had not participated in either the filming session or Genius Hour. So, they are introducing a new group of students to this approach to extending learning opportunities.

2016-6-22: The following day, Ms. Dow, Ms. Koch, and I attended the annual MEGAT Unconference. The organizers assigned a session to address opportunities for differentiation. We attended that session, and Ms. Emerson invited us to share our work on Genius Hour. The session turned into a presentation about our work on developing Genius Hour. Figure 2 shows Ms. Dow and Ms. Koch sharing stories and reflections about the project.

June 21, 2016 at 6:15 AM
Comments (0)

We are trying to detect patterns. In the images of the school show that a parent shared on social media, we could see arcs of light rather than point sources. That means the wheel rotates during the time the 'shutter' is open. We want to make use of this effect to help jugglers see differences in patterns between proficient and novice performance. We have been testing various cameras to see which will support this work. In this case, we are using a Nikon CoolPix to capture rapid rotation by using a combination of exposure compensation and photographing in a dark room.

June 23, 2016 at 1:07 PM
Comments (0)

When kids see their older siblings learn to ride, they often want to learn, too. Age gaps, family dynamics and several other factors may influence how they approach the challenge. This week, a younger sibling came to the last week of WOW camp a bit tentatively. 

We worked to make sure that she got a good start. She used great precision to get the starting angle of the pedals correct. But like many new riders, she needed a little help to get it right. I showed her the trick of spinning the wheel, using momentum, and then setting it down on the floor. Immediately after I finished explaining, she said, "Ah, that's why my sister does that all the time." It's great to get direct evidence of the informal learning that comes from watching others learn.

In Figure 1, she is riding with assistance on Thursday morning of her first week. While this is not the fastest progression, it represents great progress. But I also had to explain to another rider, when she wanted to pair-ride too, that siblings typically have an advantage.

July 21, 2016 at 11:31 AM
Comments (1)
What a nice story! Always great to have a role model to learn and get inspiration from.
11 months ago