Wolfram Blog
News, views, and ideas from the front lines at Wolfram Research.

 
 
1. The Computational Classroom: Easy Ways to Introduce Computational Thinking into Your Lessons

A version of this post was originally published on the Tech-Based Teaching blog as Computational Lesson-Planning: Easy Ways to Introduce Computational Thinking into Your Lessons. Tech-Based Teaching explores the intersections between computational thinking, edtech and learning.

Sometimes a syllabus is set in stone. You've got to cover X, Y and Z, and no amount of reworking or shifting assignments around can change that. Other factors can play a role too: limited time, limited resources or even a bit of nervousness at trying something new.

But what if you'd like to introduce some new ideas into your lessons, ideas like digital citizenship or computational thinking? Introducing computational thinking to fields that are not traditionally part of STEM can sometimes be a challenge, so feel free to share this journey with your children's teachers, friends and colleagues.

The computational classroom

Computational thinking is a mindset that is complemented by technology, not necessarily bound by it. In fact, some concepts can be as simple as adding a reflective assessment to the end of your lessons, allowing students a chance to uncover their thought processes and engage in metacognitive thought.

While computational thinking most often relates to coding (unsurprising given its connection to computer science), it's really a way of looking at problems. This means that computational thinking can be introduced into all sorts of classrooms: not just in STEM classes, but in art and music classes, and even in physical education.

Why Computational Thinking Gets an A+

Like digital citizenship, computational thinking is a useful skill for students to master before they enter the real world. Practicing computational thought enables them to pick up new technologies, utilizing them for work and play. Computational thinking is a transferable skill, and it can act as a lens through which students can view problems outside the classroom.

In a way, practicing computational thinking through such tasks as deconstruction and experimentation can help to abolish the fear of failure that stymies a growth mindset. After all, if failure is a necessary component of success, what's there to fear? Students who feel comfortable with limited information and unknown variables are more prepared to tackle new ideas.

Given that computational thinking can provide real value for your students, how can you add it to your lessons?

Pattern Recognition: Beyond Stripes and Solids

One component of computational thinking is pattern recognition. Pattern recognition can help to determine the build of a system as well as find inefficiencies, perfect for generating an engineering mindset. It can also help to determine the variables of a given problem.

One way to practice pattern recognition is to have students look at data and see where they can find repeating data points. The typical thematic analysis assignment found in many English language arts (ELA) classes relates well to the concept of pattern recognition. When looking for symbols in a text, students are searching for repeating patterns.

Going beyond symbols, some digital humanists use computers to analyze punctuation or the overall sentiment of a book or corpus that the human eye might not catch. For example, this teacher has his students perform distant reading with Wolfram Mathematica, leading to insights on everything from speeches to rap lyrics:

textRaw=Import["http://www.gutenberg.org/cache/epub/608/pg608.txt"];

StringPosition[textRaw,{ "A SPEECH FOR","End of the Project"}]

areo=StringTake[textRaw,{635,102383}];

Row[WordCloud/@{textRaw,areo}]

Comparing phonetic distribution between two rap artists

If doing such a deep dive isn't possible, you can still have students be more thoughtful about their theses. Perhaps you can challenge them to look for unusual patterns in their texts. What if every student drew a noun from a hat prior to reading a novel, and they were responsible for noting when that noun appeared?

Take the noun food, for example. Primed to notice every instance in which a character eats a meal, a student could begin to see how food is used in a particular novel: as an abstracted symbol, an incitement of plot or even a tool for characterization.
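In the Wolfram Language, even this low-tech version of the exercise can be automated in a couple of lines. Here is a minimal sketch; the built-in example text and the chosen noun are stand-ins for whatever novel and word a class is actually using.

(* Track where a chosen noun appears across a text, as a histogram of character positions. *)
alice = ExampleData[{"Text", "AliceInWonderland"}];
Histogram[StringPosition[alice, "rabbit", IgnoreCase -> True][[All, 1]], {2000}]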

Build 'Em Up and Break 'Em Down: Deconstruction and Reconstruction

Just as pattern recognition can be a helpful tool in discovering the possible inner workings of a system, deconstruction and reconstruction allow students to demolish and rebuild the systems they discover. Systems can be found in set formulas, interconnected biological processes or even historical structures.

Changing built-in systems and tinkering with variables is the basis of coming up with new algorithms for solving problems. Understanding systems is also inherently valuable, even in the humanities: grammar and syntax underpin language, for example, while soft skills like communication are wrapped up in social mores.

Going back to the ELA classroom, perhaps looking at a broad overview of a certain genre could help students see the commonalities of that genre's books. For a fun question, you could ask, "What makes a graphic novel?" This could be a good way to introduce the idea of critical lenses.

You could also use charts or graphs to deconstruct a genre into its core components. If one component changes, is the book still a part of that genre? You could connect this idea to generators and bots, showing how traits and qualities can be remixed into new forms. (YA readers may enjoy this John Green plot generator!)

Students could add new members to the systems they uncover. For example, they could pitch a new graphic novel, or brainstorm alien biology, or try their hand at world-building for a fictional country. Deconstructing a math problem using Wolfram|Alpha could lead to insights on the whys and hows of a particular formula.

Wolfram|Alpha Pro Step-by-Step Solutions

Wolfram|Alpha Pro Step-by-Step Solutions for Calculus
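If the class has notebook access, a quick way to pull such a Wolfram|Alpha breakdown directly into a lesson is the built-in WolframAlpha function (an internet connection is required; the query below is just an illustration).

(* Send a free-form query to Wolfram|Alpha and display the resulting pods in the notebook. *)
WolframAlpha["integrate x^2 cos x dx"]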

Failure Means You're Learning

Part of computational thinking relies on students' lack of fear of getting an answer wrong, particularly during the early exploratory stages of a project. Fear of failure is common, and it persists well into adulthood. These feelings can stand in the way of making real progress, especially in a classroom where peer pressure is paramount.

To a certain extent, the only way to get a classroom to feel like a true learning environment is to make every student feel that failure is okay. Doing so relies on understanding your students as people, as dealing with personality clashes and students' individual backgrounds is a huge part of classroom management. Still, one way to banish a fear of failure is to consider ways of incorporating small wins into lessons.

In some fields such as writing or art, professionals must fight through rejection on a near-daily basis. To counter that feeling of failure, some people have created games in order to get past their initial knee-jerk reaction of despair. Some writers and artists hold "100 rejections" challenges, aiming to collect rejection letters. Others engage in rejection therapy, in which failure is the end goal, not an unfortunate "game over" end state.

Why not gamify failure in the classroom as well? One example in higher ed comes from an anecdote found in the book Art and Fear. In it, a professor divided a pottery class into two groups. While the first group had to submit one pot for a final grade, the second group had to submit a specific poundage of pots. In the end, members of the second group had the highest grades, as they were not burdened by the stress of perfection. They were able to fail over and over.

To alleviate this stress for your students, you could try emphasizing process over perfection. Rather than having students submit a long-form story as a capstone assignment, perhaps they could be graded on the amount of flash fiction they produce. The very process of iterating story after story imparts useful writing skills. In fact, there might be some unconscious pattern recognition as they go along, wherein the students notice their preferred tropes or storytelling techniques.

You could also emphasize reflection in STEM classes. More and more, educators are stressing the value of writing in math class. For a math assignment, students could notate their problem-solving processes through a program like the notebook-based Mathematica. By reflecting on their decisions through inline comments, students will use metacognition, a powerful learning tool.
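As a small illustration (the problem itself is an invented toy example, not from the post), a student's notebook entry might interleave the reasoning, written as comments, with the computation itself:

(* Step 1: the region is bounded by y = x^2 and y = 2x, so first find where the curves meet. *)
Solve[x^2 == 2 x, x]
(* Step 2: the area is the integral of the top curve minus the bottom curve between those points. *)
Integrate[2 x - x^2, {x, 0, 2}]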

Experiment, Experiment, Experiment

Similar to reconstruction, experimentation results in new ideas being extrapolated from recognized patterns. Students can create experiments to figure out what makes systems tick, and without a fear of failure, tinkering becomes play.

Obviously in STEM fields, experimentation is not only expected, but celebrated. But even in the humanities, students can intuit how cause and effect works. In music, changing between a minor key and a major key can shift the perceived mood of a piece, at least to Western ears. Color swaps in a piece of art can have emotive effects.
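The Wolfram Language can even let students hear this kind of cause and effect directly. The snippet below is my own toy example: lowering one note of a triad by a half step shifts the perceived mood from major to minor.

(* A C major arpeggio, then C minor: only the middle note changes. *)
Sound[SoundNote[#, 0.4] & /@ {0, 4, 7}]
Sound[SoundNote[#, 0.4] & /@ {0, 3, 7}]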

One idea for experimental writing assignments could be to have students create choose-your-own-adventure stories with branching paths. This type of writing emphasizes the "if this, then that" thought process that's so vital to creating algorithms or step-by-step instructions. There are several game development-based tools available for creating these stories, but even pen and paper can work, as shown in the sketch below.
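Here is a minimal sketch of how such a branching story could be represented and visualized in the Wolfram Language; the node names and text are invented placeholders.

(* Each node holds some story text plus the nodes a reader can jump to next. *)
story = <|
   "start" -> <|"text" -> "You wake in a forest.", "choices" -> {"path", "cave"}|>,
   "path" -> <|"text" -> "The path leads home. The end.", "choices" -> {}|>,
   "cave" -> <|"text" -> "The cave is dark. The end.", "choices" -> {}|>|>;
(* Visualize the if-this-then-that structure as a graph of choices. *)
Graph[Flatten[Thread[# -> story[#, "choices"]] & /@ Keys[story]], VertexLabels -> "Name"]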

Exploring Further Resources

Some of these ideas might seem a bit simple. And that's because they are! Computational thinking doesn't have to be complicated to be useful.

Even with the rote standby of analyzing a text for themes and characters, you can cement the idea of recognizing patterns or breaking down a system. As you become more comfortable with computational thinking, and if the IT resources are available, you can then begin to introduce technology into your lessons. For example, using the Wolfram Language to dig deep into problems using code could vastly aid in analysis and experimentation.

As more people recognize the value of computational thinking, educators are publishing their own lesson plans and ideas online. This online book, for example, offers a treasure-trove of ideas for incorporating computational thinking into lessons by subject. Examples range from robotics to data analysis and more.

Computational Thinking Initiative

Another useful resource is Computational Thinking Initiatives (CTI), a nonprofit group devoted to sharing tools and resources for educators looking to introduce computational thinking into their classrooms. They share pre-developed lesson plans built using the Wolfram Language, offer an AI League and coding challenges, and help engage in community efforts to spread the word about computational thinking. If you want more personalized advice, you can reach out to them with questions.

If you're interested in exploring the Wolfram Language, you can check out An Elementary Introduction to the Wolfram Language. Otherwise, take a look around this blog under the Education tag to see how other educators are using Wolfram Research tools in their lessons.


2. Deep Learning and Computer Vision: Converting Models for the Wolfram Neural Net Repository

Julian Francis, a longtime user of the Wolfram Language, contacted us with a potential submission for the Wolfram Neural Net Repository. The Wolfram Neural Net Repository consists of models that researchers at Wolfram have either trained in-house or converted from the original source code, then curated, thoroughly tested and finally rendered in a very rich, computable knowledge format. Julian was our very first outside user to go through the process of converting and testing the nets.

We thought it would be interesting to interview him on the entire process of converting the models for the repository so that he could share his experiences and future plans to inspire others.

How did you become interested in computer vision and deep learning?

As a child, I was given a ZX81 (an early British home computer). Inspired by sci-fi television programs, I became fascinated by the idea of endowing the ZX81 with artificial intelligence. This was a somewhat ambitious goal for a computer with 1 KB of RAM! By the time I was at university, I felt that general AI was too hard and ill-defined to make good progress on, so I turned my attention to computer vision. I took the view that by studying computer vision, a field with a more clearly defined objective, we might learn some principles along the way that would be relevant to artificial intelligence. At that time, I was interested in what would now be called deformable part models.

After university I was busy developing my career in IT, and my interest in AI and computer vision waned a little until around 2006, when I stumbled on a book by David MacKay on inference theory and pattern recognition. The book dealt extensively with probabilistic graphical models, which I thought might have strong applications in computer vision (particularly placing deformable part models on a more rigorous mathematical basis). However, in practice I found it was still difficult to build good models, and defining probability distributions over pixels seemed exceptionally challenging. I did keep up my interest in the field, but around 2015 I became aware that major progress in this area was being made by deep learning models (the modern terminology for describing neural networks, with a particular emphasis on having many layers in the network), so I was intrigued by this new approach. In 2016, I'd written a small deep learning library in Mathematica (now retired) to validate those ideas. It would be considered relatively simple by modern standards, but it was good enough to train models such as MNIST, CIFAR-10, basic face detection, etc.

How did you find out about the Wolfram Neural Net Repository?

I first came across the repository in a blog by Stephen Wolfram earlier this year. I am a regular reader of his blogs, and find them helpful for keeping up with the latest developments and understanding how they fit in with the overall framework of the Wolfram Language.

In your opinion, how does the Wolfram Neural Net Repository compare with other model libraries?

The Wolfram Neural Net Repository has a wide range of high-quality models available covering topics such as speech recognition, language modeling and computer vision. The computer vision models (my particular interest) are extensive and include classification, object detection, keypoint detection, mask detection and style transfer models.

Wolfram Neural Net Repository

I find the Wolfram Neural Net Repository to be very well organized, and it's straightforward to find relevant models. The models are very user friendly; a model can be loaded in a single line of code. The documentation is also very helpful, with straightforward examples showing you how to use the models. From the time you identify a model in the repository, you can be up, running and using that model against your own data/images within a matter of minutes.

Other neural net frameworks, in contrast to the Wolfram Neural Net Repository, can be time-consuming to install and set up. In many frameworks, the architecture is separate from the trained parameters of the model, so you have to manually install each of them and then configure them to work together. The files are not necessarily directly usable, but may require installed tools to unpack and decompress them. Example code can also come with its own set of complex dependencies, all of which will need to be downloaded, installed and configured. Additionally, the deep learning framework itself may not be available on your platform in a convenient form; you may be expected to download, compile and build it yourself. And that process itself can require its own toolchain, which will need to be installed. These processes are not always well documented, and there are many things that can go wrong, requiring a trawl around internet forums to see how other people have resolved these problems. While my experience is that these things can be done, it requires considerable systems knowledge and is time-consuming to resolve.

From where did you get the idea of converting models?

I'd read several research papers on arXiv and other academic websites. My experience often was that the papers could be difficult to follow, details of the algorithms were missing and it was hard to successfully implement them from scratch. I would search GitHub for reference implementations with source code. There are a number of deep learning frameworks out there, and it was becoming clear that several people were translating models from one framework to another. Additionally, I had converted a face-detection model from a deep learning framework I had developed in Mathematica in 2016 to the Mathematica neural network framework in 2017, so I had some experience in doing this.

What's your take on transfer learning, and why should it be done?

A difficulty in deep learning is the immense amount of computation required in order to train up models. Transfer learning is the idea of using one trained network in order to initialize a new neural network for a different task, where some of the knowledge needed for the original task will be helpful for this new task. The idea is that this should at least initialize the network in a better starting point, as compared with a completely random initialization. This has proved crucial to enabling researchers to experiment with different architectures in a reasonable time frame, and to enable the field to make good progress.

For example, object detectors are typically organized in two stages. The first stage (the base network) is concerned with transforming the raw pixels into a more abstract representation. Then a second stage is concerned with converting that into representations defining which objects are present in the image and where they are. This enables researchers to break down the question of what is a good neural network for object detection into two separate questions: what is a good base network for high-level neural activity descriptions of images, and what is a good architecture for converting these to a semantic output representation, e.g. a list of bounding boxes?

Researchers would typically not attempt to train the whole network from a random initialization, but would pick a standard base network and use the weights from that trained model to initialize their new model. This has two advantages. First, it can cut training time from weeks to days or even hours. Second, the datasets for image classification are much larger than the datasets we currently have for object detection, so the base network has benefited from the knowledge gained from being trained on millions of images, whereas our datasets for object detection might have only tens of thousands of training examples available. This approach is a good example of transfer learning.
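As a concrete (and deliberately generic) illustration of the pattern Julian describes, here is a minimal transfer-learning sketch in the Wolfram Language. It is not his SSD conversion: the choice of base model, the two toy class names and the random training images are all assumptions made purely for illustration.

(* Reuse a pretrained classifier from the Wolfram Neural Net Repository as a
   frozen feature extractor and train only a small new head on top of it. *)
base = NetModel["VGG-16 Trained on ImageNet Competition Data"];
features = NetDrop[base, -2]; (* drop the original 1000-class linear and softmax layers *)
newNet = NetChain[
   <|"features" -> features, "classify" -> LinearLayer[2], "probabilities" -> SoftmaxLayer[]|>,
   "Output" -> NetDecoder[{"Class", {"day", "night"}}]];
(* Toy stand-in data; in practice these would be real labeled images. *)
toyData = Table[RandomImage[1, {224, 224}, ColorSpace -> "RGB"] -> RandomChoice[{"day", "night"}], 20];
(* Freeze the pretrained weights so that only the new head is trained. *)
trained = NetTrain[newNet, toyData, LearningRateMultipliers -> {"features" -> 0}, MaxTrainingRounds -> 2];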

What model(s) did you convert, and what broader tasks do they achieve?

I have converted the SSD-VGG-300 Pascal VOC, the SSD-VGG-512 Pascal VOC and the SSD-VGG-512 COCO models. The first two models detect objects from the Pascal VOC dataset, which contains twenty object classes (such as cars, horses and people). There is a trade-off between these two models: the second is slower but more accurate.

NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"]

NetModel["SSD-VGG-512 Trained on PASCAL VOC2007, PASCAL VOC2012 and MS-COCO Data"]

The third model can detect objects from the Microsoft COCO dataset, which contains eighty different object classes (including the Pascal VOC classes):

NetModel["SSD-VGG-512 Trained on MS-COCO Data"]

These detectors are designed to detect which objects are present in an image, and where they are. My main objective was to understand in detail how these models work, and to make them available to the Wolfram community in an easy and accessible form. They are a Mathematica implementation of the family of models described in "SSD: Single Shot MultiBox Detector" by Wei Liu et al., a widely referenced paper in the field.

How do you think one can use such a model to create custom applications?

I'd envisage these models being used as the object-detection component in a larger system. You could use the model to do a content-based image search in a photo collection, for example. Or it could be used as a component in an object-tracking system. I could imagine it having applications in intruder detection or traffic management. Object detection is a very new technology, and I am sure there can be many applications that haven't even been considered yet.

How does this model compare with other models for object detection?

Currently, popular neural network-based object detectors can be grouped into two-stage detectors and single-stage detectors.

The two-stage detectors have two separate networks. The first is an object proposal network, whose task is to determine the location of possible objects in the image. It is not concerned with what type of object it is, just with drawing a bounding box around that object. It can produce thousands of bounding boxes on one image. Each of those region proposals is then fed into a second neural network that tries to determine if it is an actual object and, if so, what type of object it is. R-CNN, Fast/Faster R-CNN and Region Proposal Network-based detectors fall into this category.

The single-stage detectors work by passing the image through a single neural network whose output directly contains information on which objects are in the image and where they are. The YOLO family and the Single Shot Detectors (SSD) family fall into this category.

Generally, the two-stage detectors have achieved greater accuracy. However, the single-stage detectors are much faster. The models that I converted are all based on the Single Shot Detector family with a VGG-type base network. Their closest relatives are the YOLO detectors. There is a YOLO version 2 model in the Wolfram Neural Net Repository; by comparison, the most accurate of the models I converted is slower than it, but more accurate.

Why would you want to use the Wolfram Language for creating neural network applications?

I have been a Mathematica user since the summer of 1991, so I have a long familiarity with the language. I find that I can write code that expresses my thoughts at exactly the right level of abstraction. I appreciate the multiparadigm approach whereby you can decide for yourself what works best for your particular problem. By using the Wolfram Language, you gain access to all the functionalities available in the extensive range of packages. I find the code I write in the Wolfram Language is typically shorter and clearer than what I write in other languages.

What would you say to people who are either new to the Wolfram Language or deep learning to get them started?

For people new to deep learning, I recommend a mixture of reading blogs and following a video lecture-based course. Medium hosts a number of blogs that you can search for deep learning topics. Google Plus has a deep learning group that can be a good source for keeping up to date on news in the field. I'd also recommend Andrew Ng's very popular course on machine learning at Coursera. In 2015, Nando de Freitas gave a course at Oxford University, which I found to be thorough but also very accessible. Andrej Karpathy's CS231n Winter 2016 course is also very good for beginners. The last two courses can be found on YouTube. After following a couple of these courses, you should have a reasonable overview of the field. They are not overly mathematical, but a basic knowledge of linear algebra is assumed, and some understanding of the concept of partial differentiation is helpful.

For people new to the Wolfram Language, and especially if you come from a procedural/object-oriented programming background (e.g. C/C++ or Java), I would encourage you to familiarize yourself with concepts such as vectorization (acting on many elements simultaneously), which is usually both more elegant and much faster. I would suggest getting a good understanding of the core language, and then aiming to get at least an overview of the many different packages available. The documentation pages are an excellent way to go about this. Mathematica Stack Exchange can also be a good source of support.

It is a very exciting time to be involved in computer vision, and converting models is a great way to understand how they work in detail. I am working on translating a model for an extremely fast object detector, and I have a number of projects that I'd like to do in the future, including face recognition and object detectors that can recognize a wide range of classes of objects. Watch this space!



3. Interning at Wolfram: My Regeneration as a Theoretical Scientist

How does it feel to be an intern at Wolfram?

Most undergraduate college students chase internship opportunities in New York, Miami, Seattle and particularly San Francisco at young but large high-tech companies like Uber, Pinterest, Quora and Expedia. These companies offer the best salaries, perks, bosses, coworkers, catered lunches and other luxurious amenities available in such large cities. You would seldom hear about any of these people pursuing opportunities in small, lesser-known towns like Ames, Iowa, or Laramie, Wyoming; and Champaign, Illinois, where Wolfram Research is based, is one of those smaller towns.

Many students want to go into computer science, as it's such a rapidly developing field. They especially want to work at those companies on the West Coast. If you're in a different field, like natural science, you might think there's nothing beyond on-campus research for work experience. At Wolfram Research, though, there is.

Working at Wolfram

Wolfram Research is a tight-knit company with a relatively small office where everyone can easily get to know each other. Fortunately, I have been in good company for the nearly two years I have worked here. Most of my full-time colleagues are highly qualified, with prestigious master's or doctoral degrees in science or engineering. I have picked up a lot from their diverse knowledge related to the subjects I intend to pursue. Like most of them, I am a theoretical physicist, an applied mathematician and somewhat of a computer scientist myself, and this is what has compelled me to keep interning here instead of at other companies such as Intel or Boeing. Access to our company library, with its vast collection of books on modern computer science, applied science, mathematics, statistics, intelligent systems and various other subjects, allows everyone to learn about activities happening at much larger companies. This gives an incentive to keep picking up new skills and diversifying one's industrial skill set for more outside projects. It's thanks to the good influence of the library and of my coworkers that I have stayed at Wolfram much longer than most other interns here in order to develop my skill set.

A Little about Myself

My name is Parik Kapadia and I have been an intern in the Algorithms R&D department at Wolfram Research for nearly two years now. I am also a student at the University of Illinois at Urbana-Champaign, majoring in electrical and computer engineering with a minor in statistics. I'm also about to complete a Certificate in Data Science offered by the university.

My Internship Projects

Throughout the time I've been working at Wolfram, my projects have allowed me to return to subjects I hadn't studied for two years, since they related to high-school or first-year-college courses. I've also worked on projects far beyond what an entry-level employee would take on. Working on such projects as an intern has given me more experience than I would receive at other companies.

Benchmark for Calculus

My initial training consisted of solving around 1,200 examples for intermediate and advanced calculus from a well-known textbook, Calculus by James Stewart, used by millions of students all over the world. It had already been nearly two years since I completed the three-course calculus sequence required for engineering majors like myself, and this put me in a prime position to carry out this project. It turned out that the chapters covering Calculus I and II (or AP Calculus AB and BC) had already been completed by previous interns. This meant that I only had to complete the last six chapters of the course, which consisted of Calculus III. This final segment covers topics in multivariable calculus, such as vectors, vector differentiation and integration, partial derivatives, multiple integrals and vector calculus. In turn, these topics are used in physics and engineering, most particularly electromagnetism, one of my favorite topics in theoretical electrical engineering.
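To give a flavor of the material, here are two illustrative Calculus III computations in the Wolfram Language (my own examples, not problems taken from the textbook):

(* A mixed partial derivative and a double integral over a triangular region. *)
D[x^2 y + Sin[x y], x, y]
Integrate[x y, {x, 0, 1}, {y, 0, x}]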

The result of this benchmarking project was a huge collection of around 6,000 problems completely solved using the Wolfram Language. The success of this project eventually led to a course in calculus that was released in September.

Since then, I have been coding an endless string of problems in advanced and applied mathematics from a collection of mathematics textbooks. Although I am an engineering major, I have seldom had the opportunity to study these textbooks for any major projects, though graduate-level engineering research may well require them as it becomes more interdisciplinary. Having coded more than 10,000 of these examples since the completion of the calculus book project, I have become a born-again applied mathematician and theoretical physicist, and now feel the need to return to those roots that I previously nourished as a high-school student. I am now hoping to work full time at Wolfram Research in the near future.

Mathematics Stack Exchange

My first formal, full-fledged project after the end of the calculus project was to collect a large number of examples regarding the use of the RSolveValue function to determine the limiting behavior of recursive sequences. In order to do this, I looked at all the relevant examples on Mathematics Stack Exchange and compiled a notebook showing how well they worked with RSolveValue. This was a very satisfying project, and you can imagine my thrill when Stephen Wolfram used an example similar to one I collected in his blog post announcing Version 11.2.

Example 52
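As a sketch of the kind of question involved (this particular recurrence, the Babylonian square-root iteration, is my own illustrative choice), RSolveValue can return the limiting value of a recursively defined sequence directly:

(* The Babylonian iteration converges to Sqrt[2]. *)
RSolveValue[{a[n + 1] == (a[n] + 2/a[n])/2, a[1] == 1}, a[Infinity], n]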

Multivariate Limits

Another interesting project assigned to me was to check more than 1,200 examples for multivariate limits, which were a new feature in Version 11.2 of Mathematica. This required carefully going through all of the examples manually, doing sanity checks to make sure that the results and plots agreed, and tweaking the plots so that they looked elegant. Here, my knowledge of multivariable calculus came to the rescue, and I helped to select 1,000 examples that were used for the blog post Limits without Limits in Version 11.2. As you can see, people like myself work in the background at Wolfram to make sure that all publications and products are of the highest quality, and we take pride in maintaining the highest standards.
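For illustration (my own example, not one of the 1,200 checked), here is a multivariate limit that fails to exist because different approach paths give different values:

(* Along y = k x^2 the expression equals k/(1 + k^2), so no single limit exists. *)
Limit[(x^2 y)/(x^4 + y^2), {x, y} -> {0, 0}]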

Asymptotics

As a final project, I would like to mention my role in setting up large benchmarks for the asymptotics features in Version 11.3. I did this by collecting examples of differential equations and integrals from around 15 books, ranging from undergraduate mathematics and engineering texts to advanced graduate-level discussions of asymptotic expansions. The challenge here was to make sure that the results from the new asymptotics functions agreed with intuition and with numerical or symbolic comparisons against built-in functions such as DSolve, Integrate, NIntegrate and Series. The complete benchmark ran to around 4,000 examples and boosted the developers' confidence in this exciting new set of functions, and some of the examples were used in a blog post after Version 11.3 was released.
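As a toy version of such a check (the equation is my own, not one from the benchmark), a series solution from the new asymptotics functions can be compared against DSolve and Series:

(* Series solution of y' = y, y(0) = 1, versus the Taylor expansion of the exact solution. *)
AsymptoticDSolveValue[{y'[x] == y[x], y[0] == 1}, y[x], {x, 0, 4}]
Normal[Series[DSolveValue[{y'[x] == y[x], y[0] == 1}, y[x], x], {x, 0, 4}]]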

Future Plans

A total of 21 months have passed since I began my internship at Wolfram Research, and I am now looking forward to future plans. This internship has facilitated my rebirth as a budding theoretical physicist, applied mathematician and computer scientist, and I intend to build my career on the growth and experience I have gained while interning at Wolfram Research.

There are endless possibilities for me to hone everything I did here, of course; I gained experience in research and development as well. Government jobs, software and internet, data analytics, health care, bioinformatics, computer hardware and a diverse set of other industries will need the expertise I have gained. The projects I have done as part of my internship have given me opportunities for moving on to quality assurance tasks, including debugging and testing Mathematica. Interning at Wolfram has given me the opportunity to make use of my skills and education, learn more about the directions I'd like my career to move in and build mutually beneficial relationships. If you're a college student looking for a similar experience to my own, apply for an internship at Wolfram.

Apply for internships
Find an internship in your field at Wolfram. Available opportunities are continually updated.

4. Computation + Literature in High School: Doctoral-Level Digital Humanities

Thanks to the Wolfram Language, English teacher Peter Nilsson is empowering his students with computational methods in literature, history, geography and a range of other non-STEM fields. Working with a group of other teachers at Deerfield Academy, he developed Distant Reading: an innovative course for introducing high-level digital humanities concepts to high-school students. Throughout the course, students learn in-demand coding skills and data science techniques while also finding creative ways to apply computational thinking to real-world topics that interest them.

In this video, Nilsson describes how the built-in knowledge, broad subject coverage and intuitive coding workflow of the Wolfram Language were crucial to the success of his course:

Modernizing the Humanities with Computation

Nilsson's ultimate goal with the course is to encourage computational exploration in his students by showing them applications relevant to their lives and interests. He notes that professionals in the humanities have increasingly turned toward computational methods for their research, but that many students entering the field lack the coding skills and the conceptual understanding to get started. With the Wolfram Language, he is able to expose students to both in a way they find intuitive and easy to follow.

To introduce fundamental concepts, he shows students a pre-built Wolfram Notebook exploration of John Milton's Areopagitica featuring a range of text analysis functions from the Wolfram Language. First he retrieves the full text from Project Gutenberg using Import:

textRaw=Import["http://www.gutenberg.org/cache/epub/608/pg608.txt"];

He then demonstrates basic strategies for cleaning the text, using StringPosition and StringTake to find and eliminate anything that isn't part of the actual work (i.e. supplementary content before and after the text):

StringPosition[textRaw,{ "A SPEECH FOR","End of the Project"}]

areo=StringTake[textRaw,{635,102383}];

To quickly show the difference, he makes a WordCloud of the most common words before and after the cleanup process:

Row[WordCloud/@{textRaw,areo}]

From here, Nilsson demonstrates some common text analyses and visualizations used in the digital humanities, such as making a Histogram of where the word "books" occurs throughout the piece:

Histogram[StringPosition[areo,"books"][[All,1]],{5000}]

Or computing the average number of words per sentence with WordCount and TextSentences:

N[WordCount[areo]/Length[TextSentences[areo]]]

Or finding how many unique words are used in the piece with TextWords:

Length[DeleteDuplicates[TextWords[areo]]]

He also discusses additional exploration outside the text itself, such as using WordFrequencyData to find the historical frequency of words (or n-grams) in typical published English text:

DateListPlot[WordFrequencyData[{"war","peace"},"TimeSeries"]]

Building this example in a Wolfram Notebook allows Nilsson to mix live code, text, images and results in a highly structured document. And after presenting to the class, he can pass his notebook along to students to try themselves. Even students with no programming experience learn the Wolfram Language quickly, starting their own explorations after just a few days. Throughout the course, Nilsson encourages students to apply the concepts in different ways and try additional methods. The challenge, he says, is getting them to think, "Oh, I can count this."

Doctoral-Level Research in a High-School Course

Once students are acquainted with the language and the methods, they start formulating research ideas. Nilsson says he is consistently impressed with the ingenuity of their projects, which span a broad range of humanities topics and datasets. For example, here is an analysis comparing phonetic distribution (phoneme counts) between two rap artists' works:

Analysis comparing phonetic distribution

Students take advantage of the range of visualization types in the Wolfram Language to discover patterns they wouldn't otherwise have noticed, such as this comparison of social networks in the Bible (using Graph plots):

Comparison of social networks in the Bible

Nilsson points out how much easier it is for students to do these high-level analyses in the digital age. "What took monks and scholars months and years to accumulate, we can now do in five minutes," he says. He cites a classic analysis that has been recreated in his class, tracking geographic references in War and Peace with a GeoSmoothHistogram:

loc=Interpreter["Country"]/@TextCases[Rest@StringSplit[ResourceData["War and Peace"],"BOOK"],"Country"];

ListAnimate[GeoSmoothHistogram[#,GeoRange->{{-40, 80}, {-20, 120}}]&/@loc]

War and Peace

When sharing his activities with colleagues in higher education, he says many have been impressed with the depth he's able to achieve. Some have compared his students' projects to doctoral-level work, and that's in a one-semester high-school course. But, he says, "You don't have to be a doctoral student to do these really interesting analyses. You just have to know how to ask a good question."

Reflecting on and Improving Student Writing

Nilsson also has his students analyze their own writing, measuring and charting key factors over time, from simple concepts like word length and vocabulary size to more advanced properties like sentence complexity. He sees it as an opportunity for them to examine the progression of their writing, empowering them to improve and adapt over time.

Many of these exercises go beyond the realm of simple text analysis, borrowing concepts from fields like network science and matrix algebra. Fortunately, the Wolfram Language makes it easy to represent textual data in different ways. For instance, TextStructure generates structural forms based on the grammar of a natural language text excerpt. Using the "ConstituentGraphs" property gives a graph of the phrase structure in each sentence:

cg=Flatten[TextStructure[#,"ConstituentGraphs"]&/@
TextSentences[WikipediaData["computer","SummaryPlaintext"]]];

RandomChoice[cg]

AdjacencyMatrix gives a matrix representation of connectivity within the graph for easier visual inspection and computation:

MatrixPlot@AdjacencyMatrix[%]

Closeness centrality is a measure of how closely connected a node is to all others in a network. Since each constituent graph represents a network of related words, sentences with a low average closeness centrality can be thought of as simpler. Applying ClosenessCentrality (and Mean) to each graph gives a base measure of how complex each sentence is:

ListPlot[Mean[ClosenessCentrality[#]]&/@cg,Filling->Axis,PlotRange->{0,.3}]

Using these and other analytical techniques, students produce in-depth research reports based on their findings. Here is a snapshot of one paper from a student who used these strategies to examine sentence complexity in his own writing:

Using Closeness Centrality

Besides giving students the opportunity to analyze their high-school writing, Nilsson says this exercise also gives upcoming graduates a solid foundation for research analytics that will be useful in their college careers.

The Right Tool for the Job

Overall, the Wolfram Language has provided Nilsson with the perfect system for research and education in the digital humanities. Since adopting it into his curriculum, he has been able to make real improvements in student understanding and outcomes that he couldn't have achieved otherwise. He notes that, when attempting similar exploration with Excel, MATLAB, R and other systems, none provided the unique combination of power, usability and built-in knowledge of the Wolfram Language. By wrapping everything into one coherent system, he says, the Wolfram Language gives him "a really potent tool for doing all kinds of analyses that are much more difficult in any other context."



5. As of Today, the Fundamental Constants of Physics (c, h, e, k, NA) Are Finally Constant!
This morning, representatives of more than 100 countries agreed on a new definition of the base units for all weights and measures. Here's a picture of the event that I took this morning at the Palais des Congrès in Versailles (down the street from the Château):

An important vote for the future weights and measures used in science, technology, commerce and even daily life happened here today. This morning's agreement is the culmination of at least 230 years of wishing and labor by some of the world's most famous scientists. The preface to the story entails Galileo and Kepler. Chapter one involves Laplace, Legendre and many other late-18th-century French scientists. Chapter two includes Arago and Gauss. Some of the main figures of chapter three (which I would call The Rise of the Constants) are Maxwell and Planck. And the final chapter (Reign of the Constants) begins today and builds on the work of contemporary Nobel laureates like Klaus von Klitzing, Bill Phillips and Brian Josephson.

I had the good fortune to witness today's historic event in person. Today's session of the 26th meeting of the General Conference on Weights and Measures included a vote on Draft Resolution A, which grounds the definitions of the units in fundamental constants. The vote passed, and so the draft resolution has become an internationally binding agreement.

While the vote was the culmination of the day, this morning we heard four interesting talks (SI stands for Système international d'unités, or the International System of Units):

The Quantum Hall Effect and the Revised SI, by Klaus von Klitzing (who is my former postdoc advisor, by the way)
The Role of the Planck Constant in Physics, by Jean-Philippe Uzan
Optical Atomic Clocks: Opening New Perspectives on the Quantum World, by Jun Ye
Measuring with Fundamental Constants: How the Revised SI Will Work, by Bill Phillips

It was a very interesting morning. Here are some pictures from the event. Yes, these are really tattoos of the new value of the Planck constant.

So why do I write about this? There are a few reasons why I care about fundamental constants, units and the new SI. Although I am deeply involved with units and fundamental constants in connection with their use in Wolfram|Alpha and the Wolfram Language, I was wearing a media badge today because I have been the science adviser (and sometimes the best boy grip) for the forthcoming documentary film The State of the Unit.

Units appear in any real-world measurement, and fundamental constants are of crucial importance for the laws of physics. Our Wolfram units team has been collecting data and implementing code over the last decade to help the unit implementation in the Wolfram Language become the world's most complete and comprehensive computational system of units and physical quantities. And the exact values of the fundamental constants will be of relevance here.

I have been acting as the scientific adviser for The State of the Unit, which is directed by my partner Amy Young. The documentary covers the story of the kilogram from French Revolutionary times to literally today (November 16, 2018). Together we have visited many scientific institutes, labs, libraries and museums to talk with scientists, historians and curators about the contributions of giants of science such as Kepler, Maxwell, Laplace, Lalande, Planck, de Broglie and Delambre. I have had the fortune to have held in my (gloved) hands the original platinum artifacts and handwritten papers of the heroes of science of the last few hundred years.
Lastly, this blog is a natural continuation of my blog from two years ago, An Exact Value for the Planck Constant: Why Reaching It Took 100 Years, which discusses in more detail the efforts related to the future definition of the kilogram through the Planck constant.

A lot could be written about the high-precision experiments that made the 2019 SI possible. The hardest (and most expensive) part was the determination of the Planck constant. It involved half a dozen so-called Kibble balances that employ two macroscopic quantum effects (the Josephson and quantum Hall effects) and the famous roundest objects in the world: silicon shapes of unprecedented purity that are nearly perfect spheres. The State of the Unit will show details of these experiments and interviews with the researchers.

Before discussing in more detail today's event and what this redefinition of our units means, let me briefly recall the beginnings of the story.

Something very important for modern science, technology and more happened on June 22, 1799. At the time, this day in Paris was called 4 Messidor an 7 according to the French Revolutionary calendar. A nine-year journey led by the top mathematicians, physicists, astronomers, chemists and philosophers of France (including Laplace, Legendre, Condorcet, Berthollet, Lavoisier, Haüy, de Borda, Fourcroy, Monge, Prony, Coulomb, Delambre and Méchain) came to a natural end. It was carried out in the middle of the French Revolution; some of the main figures of the story lost their lives in it. And in the end, the metric system was born.

The journey started seriously when, on April 17, 1790, Charles Maurice de Talleyrand-Périgord (Bishop of Autun) presented a plan to the French National Assembly to build a new system of measures based on nature (the Earth) using the decimal system, which was not generally used at this time. At the end of April 1790, the main daily French newspaper, the Gazette Nationale ou le Moniteur Universel, devoted a large article to Talleyrand's presentation.

The different weights and measures throughout France had become a serious economic obstacle for trade and a means for the aristocracy to exploit peasants by silently changing the measures that were under their control. Not surprisingly, measuring land matters a lot, and so the Department of Agriculture and Trade was the first to join (in 1790) Talleyrand's call for new standardized measures. A few months later, in August 1790, the project of building a new system of measures became law. France at this time was still under the reign of Louis XVI.

To make physical realizations of the new measures, the group employed Louis XVI's goldsmith, Marc-Étienne Janety, to prepare the purest platinum possible at the time. And to determine the absolute size of the new standards, the length of the meridian was measured with unprecedented precision through a net of triangles between Dunkirk and Barcelona. The so-called Paris meridian goes right through Paris, and actually through the main room (through the middle of the largest window) of the Paris Observatory, which was built in 1671.

And to disseminate the new measures, the public had to be convinced of and educated about the advantages of using base 10 for calculations and trade. The National Convention discussed this topic and requested special research about the use of the decimal system (the dissertation shown is from 1793). The creation of the new measures was not a secret project of scientists.
Many steps were publicly announced and discussed, including through posters displayed throughout Paris and the rest of France (the poster shown is from 1793). The best scientists of the time were employed either part time or full time in the making of the new metric system (see Champagne's The Role of Five Eighteenth-Century French Mathematicians in the Development of the Metric System and Gillispie's Science and Polity in France for more detailed accounts).

Adrien-Marie Legendre, today better known through Legendre polynomials and the Legendre transform, spent a large amount of time in the Temporary Bureau of Weights and Measures. Here is a letter signed by him on the official letterhead of the bureau.

René Just Haüy, a famous mineralogist, was employed by the government to write a textbook about the new length, area, volume and mass units. His Instruction abrégée sur les mesures déduites de la grandeur de la terre, uniformes pour toute la République, et sur les calculs relatifs à leur division décimale (Abridged Instruction on the Measures Derived from the Size of the Earth, Uniform for the Whole Republic, and on the Calculations Relating to Their Decimal Division) was first published in 1793 and became, in its 150-page abridged version, a bestseller that was republished many times throughout France.

After nearly 10 years, these efforts culminated in a rectangular platinum bar 1 meter in length, and a platinum cylinder 39 millimeters in width and height with a weight of 1 kilogram. These two pieces would become the definitive standards for France and were built by the best instrument makers of the time, Étienne Lenoir and Nicolas Fortin. The two platinum objects were the first and defining realization of what we today call the metric system. A few copies of the platinum meter and kilogram cylinder were made; all have since remained in the possession of the French government.

Cities, municipalities and private persons could buy brass copies of the new standards. Here is one brass meter from Lenoir. (The script text under METRE reads "Égal à la dix-millionième partie du quart du Méridien terrestre", which translates to "Equal to the ten-millionth part of the quarter of the Earth meridian.") While the platinum kilogram was a cylinder, the first weights for the public were brass parallelepipeds.

The determination of the length of the meridian was done with amazing effort and precision. But a small error crept in, and the resulting meter deviated about 0.2% from its ideal value. (For the whole story of how this happened, see Ken Alder's The Measure of All Things.) The modern value of the meridian quadrant is readily available in the Wolfram Language:

GeodesyData["ITRF00","MeridianQuadrant"]

Finally came June 22, 1799. Louis Antoine de Bougainville, the famous navigator, had a cold and so could not actively execute his responsibilities at the National Institute. Pierre-Simon Laplace, the immortal mathematician whose name we still see everywhere in modern science through his transform, his operator and his demon, had to take his place. Laplace gave a long speech to the Council of Five Hundred (Conseil des Cinq-Cents) and the Council of Ancients (Conseil des Anciens).
After his speech, Laplace himself; Lefèvre-Gineau, Monge, Brisson, Coulomb, Delambre, Haüy, Lagrange, Méchain, Prony and Vandermonde; the foreign commissionaires Bugge (from Denmark), van Swinden and Aeneae (from Batavia), Tralles (from Switzerland), Ciscar and Pedrayes (from Spain), and Balbo, Mascheroni, Multedo, Franchini and Fabbroni (from Italy); and the two instrument makers Lenoir and Fortin took coaches over to the National Archives and deposited the meter and the kilogram in a special safe with four locks. The group also had certified measurements; the certificates were deposited as well.

Something similar happened today, once again in Paris. Over the last three days, the General Conference on Weights and Measures (CGPM) held its 26th quadrennial meeting. Its first meeting, 129 years ago, established the meter and kilogram artifacts of 1889 as international standards. The culmination of today's meeting was a vote on whether the current definition of the kilogram as a material artifact will be replaced by an exact value of the Planck constant. Additionally, the electron charge, the Boltzmann constant and the Avogadro constant will also get exact values (the speed of light has had an exact value since 1983).

Every few years, new values (with uncertainties) have been published for the fundamental constants of physics by CODATA. Back in 1998, the published value of the electron charge was 1.602 176 462(63)×10⁻¹⁹ C. The latest published value is 1.602 176 6208(98)×10⁻¹⁹ C. This morning, it was decided that soon it will be exactly 1.602 176 634×10⁻¹⁹ C, and it will always be this, forever.

But what exactly does it mean for a fundamental constant to have an exact value? It is a matter of the defining units. When a unit (like a coulomb) is exactly defined, then determining the value of the charge of an electron becomes a precision measurement task (a path followed for 100+ years since Millikan's 1909 droplet experiments). When the value of the elementary charge is exactly defined, realizing 1 coulomb becomes a task of precision metrology. The situation is similar for the other constants: give the constant an exact defined value, and use this exact value to define the unit. Most importantly, the Planck constant will get an exact value that will define the kilogram, the last unit that is still defined through a manmade artifact.

Over the past decades, scientists have measured the Planck constant, the electron mass, the Boltzmann constant and the Avogadro constant with devices that were calibrated with the base units kilogram, ampere, kelvin and mole. In the future, the values of the constants will be exact numbers that define the units. The resulting system is the natural revision of the SI, more simply called the metric system. To emphasize the new, enlarged dependence on the fundamental constants of physics, this revision has been called the new SI (or, sometimes, the constants-based SI). Today, a revolution in measurement happened. Here is a slide from Bill Phillips' talk:

Today's vote completes a process foreseen by James Clerk Maxwell in 1871. This process started in 1892, when Michelson (known for the famous Michelson-Morley experiment for the nonexistence of the aether) connected the length of a meter with the wavelength of a cadmium line. The process advanced more recently in 1983, when the speed of light changed from a measured value to an exact constant of 299,792,458 meters per second that today defines the meter.

Reading through Laplace's speech from June 22, 1799, is interesting.
Here are a few paragraphs from his speech:

We have always felt some of the advantages that the uniformity of weights and measures will have. But from one country to another and in the very interior of each country, habit, prejudices were opposed on this point to any agreement, any reform. It was therefore necessary to find the principle in Nature, that all nations have an equal interest in observing and choosing it, so far as its convenience could determine all minds.

This unity, drawn from the greatest and most invariable of bodies which man can measure, has the advantage of not differing considerably from the half-height and several other measures used in different countries; common opinion.

Overcoming a multitude of physical and moral obstacles, they have been acquitted with a degree of perfection of which we have had no idea until now. And in securing the measure they were asked, they have collected and demonstrated in the figure of the Earth the irregularity of its flattening, truths as curious as new.

But if an earthquake engulfed, if it were possible that a frightful blow of lightning would melt the preservative metal of this measure, it would not result, Citizen Legislators, that the fruit of so many works, that the general type of measures could be lost for the national glory, or for the public utility.

Many parallels could be drawn to today. International trade without a common system of units is unimaginable. As in the 1790s, dozens of scientists around the world have labored for decades to make measurements of the Planck and other constants as precise as current technology allows, a precision unimaginable even 50 years ago. And like 219 years ago, defining the new units has been an international effort. And although the platinum meter and kilogram have endured well and fortunately no earthquake or lightning has hit them, the new definitions are truly resistant against any natural catastrophe, and are even suitable for sharing with aliens.

Laplace addressed the Councils one week after van Swinden (one of the foreign delegates) had published the scientific and technical summary of all operations that were involved in the creation of the metric system. Once the new system was established, its use would be mandated by the French government. Here is a letter from the end of 1799, written by the interior minister François de Neufchâteau to the Northern Department of France, ordering the use of the new measures.

Despite the government's efforts, the metric system would not displace the old measures in France for 40 years (we can blame this largely on Napoléon). One of the last professions to adopt the new measures was medicine; only in January of 1802 was it even considered. In contrast, the proposed revised SI was accepted today, and will take effect in just 185 days, on May 20, 2019, World Metrology Day. The 2019 SI will come in much more quietly. Some newspapers have occasionally reported on the experiments. But just as with the original SI, today not everybody is 100% happy with the new system; e.g. some chemists do not like decoupling the mole from the kilogram.

The story that leads to today also covers the making of an exact replica of the late-1790s kilogram in the 1880s, as well as a slightly improved version of the platinum meter bar. This kilogram, also called the International Prototype of the Kilogram (IPK), is still today the standard of the unit of mass. As such, it is today the last artifact that is used to define a unit.
The metric system in its modern form is de facto used everywhere in science, technology, commerce, trade and daily life. All US customary measures are defined and calibrated through the metric standards. As a universal measurement standard, it was instrumental in quantifying and quantitatively describing the world. À tous les temps, à tous les peuples ("For all times, for all people") were the words that were planned for a commemorative medal, suggested on September 9, 1799 (23 fructidor an 7), to be minted to honor the creation of the metric system. (Like the metric system itself, the medal was delayed by 40 years.) Basing our units on some of the most important fundamental constants of physics bases them on the deepest quantifying properties of our universe, and at the same time defines them for all times and for all people.

So what exactly is the new SI? The metric system started with base units for time, length and mass. Today, the SI has seven base units: the second, the meter, the kilogram, the ampere, the kelvin, the mole and the candela. The so-called SI Brochure is the standard document that defines the system. The currently active definitions are:

  • s: the second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom
  • m: the meter is the length of the path traveled by light in a vacuum during a time interval of 1/299 792 458 of a second
  • kg: the kilogram is the unit of mass; it is equal to the mass of the IPK
  • A: the ampere is that constant current that, if maintained in two straight parallel conductors of infinite length and of negligible circular cross-sections and placed one meter apart in a vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newtons per meter of length
  • K: the kelvin, the unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water
  • mol: the mole is the amount of substance in a system that contains as many elementary entities as there are atoms in 0.012 kilograms of carbon-12
  • cd: the candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10¹² hertz and that has a radiant intensity in that direction of 1/683 watt per steradian

Some notes on these official definitions:

  • The kilogram is defined relative to a human-made artifact, the IPK. The IPK is a replica, made from a better platinum alloy, of the original late-18th-century kilogram made by Fortin with the platinum from Janety.
  • The definition of the ampere, involving infinitely long, infinitesimally thick wires, is not very practical.
  • The definition of the kelvin uses a macroscopic material substance, namely water.
  • With its reference to the kilogram, the definition of the mole is strictly coupled to the kilogram.

The proposed definitions of the new SI, based on fixed values of the fundamental constants, are available from the draft of the next edition of the SI Brochure. First the importance and values of the constants are postulated. The SI is the system of units in which:

  • the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom is 9 192 631 770 Hz
  • the speed of light in vacuum c is 299 792 458 m/s
  • the Planck constant h is 6.626 070 15 × 10⁻³⁴ J s
  • the elementary charge e is 1.602 176 634 × 10⁻¹⁹ C
  • the Boltzmann constant k is 1.380 649 × 10⁻²³ J/K
  • the Avogadro constant NA is 6.022 140 76 × 10²³ mol⁻¹
  • the luminous efficacy of monochromatic radiation of frequency 540 × 10¹² hertz is 683 lm/W
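For later reference, here is a small sketch (not part of the official text, just a convenient notebook-side bookkeeping step) that writes down these seven exact defining values as exact Wolfram Language Quantity objects and converts them to SI base units:

(* the seven exact defining constants of the revised SI, as exact quantities *)
definingConstants = <|
   "DeltaNuCs" -> Quantity[9192631770, "Hertz"],
   "c" -> Quantity[299792458, "Meters"/"Seconds"],
   "h" -> Quantity[662607015*10^-42, "Joules" "Seconds"],
   "e" -> Quantity[1602176634*10^-28, "Coulombs"],
   "k" -> Quantity[1380649*10^-29, "Joules"/"Kelvins"],
   "NA" -> Quantity[602214076*10^15, 1/"Moles"],
   "Kcd" -> Quantity[683, "Lumens"/"Watts"]|>;

UnitConvert[#, "SIBase"] & /@ definingConstants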
The definitions now read as follows:

  • s: The second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s⁻¹.
  • m: The meter, symbol m, is the SI unit of length. It is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m/s, where the second is defined in terms of the caesium frequency ΔνCs.
  • kg: The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.626 070 15 × 10⁻³⁴ when expressed in the unit J s, which is equal to kg m² s⁻¹, where the meter and the second are defined in terms of c and ΔνCs.
  • A: The ampere, symbol A, is the SI unit of electric current. It is defined by taking the fixed numerical value of the elementary charge e to be 1.602 176 634 × 10⁻¹⁹, when expressed in the unit C, which is equal to A s, where the second is defined in terms of ΔνCs.
  • K: The kelvin, symbol K, is the SI unit of thermodynamic temperature. It is defined by taking the fixed numerical value of the Boltzmann constant k to be 1.380 649 × 10⁻²³ when expressed in the unit J K⁻¹, which is equal to kg m² s⁻² K⁻¹, where the kilogram, meter and second are defined in terms of h, c and ΔνCs.
  • mol: The mole, symbol mol, is the SI unit of amount of substance. One mole contains exactly 6.022 140 76 × 10²³ elementary entities. This number is the fixed numerical value of the Avogadro constant, NA, when expressed in the unit mol⁻¹ and is called the Avogadro number.
  • cd: The candela, symbol cd, is the SI unit of luminous intensity in a given direction. It is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10¹² Hz, Kcd, to be 683 when expressed in the unit lm W⁻¹, which is equal to cd sr W⁻¹, or cd sr kg⁻¹ m⁻² s³, where the kilogram, meter and second are defined in terms of h, c and ΔνCs.

Compared with the early 2018 SI definitions, we observe:

  • Any reference to material artifacts or macroscopic objects has been eliminated. The definitions are all based on fundamental constants of physics.
  • The building up of the base units is much more recursive than it was before.
  • The mole has become an independent unit (de facto a counting unit).
  • The ampere, which previously had a practically unrealizable definition through infinite wires, is now conceptually (and, for small currents, practically, through single-electron pumps) reduced to a counting-like operation.
  • The 200+-year-old idea to base our units on nature has been implemented on a much deeper conceptual level.

To connect units with fundamental constants, we need two ingredients: the fundamental constants themselves and physical laws. The ontological connection between units and fundamental constants has to go through physical laws, concretely these three famous laws of physics: a) E = mc²; b) E = hν, for the definition of the kilogram through the Planck constant; and c) E = kT, for defining the kelvin through the Boltzmann constant. For the kilogram, to connect the Planck constant with a mass, one follows de Broglie and equates hν with mc² to arrive at m = hν/c², which connects mass with the Planck constant. That the values of the constants have been determined to a precision that allows us to supersede the old definitions, and thus ensures that the associated changes in the values of the units have no disruptive influence on any measurement, is a remarkable success of modern science.
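As a small numerical illustration of m = hν/c² (a minimal sketch that uses only the exact defining values written down above; the actual realization of the kilogram of course goes through Kibble balances and silicon-sphere atom counting), here is the mass equivalent of a single photon at the caesium clock frequency:

(* mass equivalent m = h nu/c^2 of one photon at the caesium clock frequency,
   computed from the exact defining constants defined above *)
UnitConvert[definingConstants["h"]*definingConstants["DeltaNuCs"]/
   definingConstants["c"]^2, "Kilograms"]

The resulting mass is absurdly tiny, but the point is that the whole chain from h to energy to mass is now anchored entirely in exact numbers.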
Two hundred twenty years ago, not everybody agreed on the new system of units. The base (10 or 12) and the naming of the units were frequent topics of public discussion. Here is a full-page newspaper article with a suggestion of a slightly different system than the classic metric system.

Now let's come back to the fundamental constants of physics. From a fundamental physics point of view, it is not a priori clear that the fundamental constants are constant over great lengths of time (billions of years) and distance in the universe. But from a practical point of view, they seem to be as stable as anything could be.

What is the relative popularity of the various fundamental constants? The arXiv preprint server, with its nearly one million physics preprints, is a good data source to answer this question. Here is a breakdown of the frequencies with which the various fundamental constants are explicitly mentioned in the preprints. (The cosmological constant only became so popular over the last three decades, and it is currently unsuitable for defining units.)

There is a lot of philosophical, theoretical physics and numerology literature about fundamental constants and their meaning, values and status within the universe (or multiverse) and so on. Why do the constants have the values they have? Is humankind lucky that the constants have the values they have (e.g. only minute changes in the values of the constants would not allow stars to form)? Fundamental constants allow many back-of-the-envelope calculations, as they govern all physics around us. Here is a crude estimation of the height of a giraffe in terms of the electron and proton masses, the elementary charge, the Coulomb constant κ, the gravitational constant G and the Bohr radius a0:

(Subscript[m, e]/Subscript[m, p])^(1/20) ((\[Kappa] e^2)/(G Subscript[m, p]^2))^(3/10) Subscript[a, 0] // UnitConvert[#, "Feet"] &

This is not the place to review the literature on the theory and uses of fundamental constants, or to contribute to it. Rather, let's use the Wolfram Language to see how fundamental constants can be used in actual computations. Fundamental constants are tightly integrated into the computational subsystem of the Wolfram Language that deals with units, measures and physical laws. The fundamental constants are in the upper-left yellow box of the network graphic:

These are the five constants from the new SI and their current values expressed in SI base units:

siConstants = {Quantity[1, "SpeedOfLight"], Quantity[1, "PlanckConstant"],
   Quantity[1, "ElementaryCharge"], Quantity[1, "BoltzmannConstant"],
   Quantity[1, "AvogadroConstant"]};

Grid[Transpose[{siConstants /. 1 -> None, UnitConvert[siConstants, "SIBase"]}],
  Dividers -> Center, Alignment -> Left] // TraditionalForm

Physical constants are everywhere in physics. We use the function FormulaData to get some examples.

siConstantNames = Alternatives @@ (Last /@ siConstants)

formulas = Select[
   {#, FormulaData[#] /. Quantity[a_, b_. "ReducedPlanckConstant"^exp_.] :>
       Quantity[a/(2 Pi), b "PlanckConstant"^exp]} & /@ FormulaData[],
   MemberQ[#, siConstantNames, Infinity] &];

The left column shows the standard names of the formulas, and the right column shows the actual formulas. The symbols (E, T, ...) are all physical quantity variables of the form QuantityVariable[..., ...].
Here are the shortest formulas that contain a physical constant:

makeGrid[data_] := Grid[
   {Column[Flatten[{#}]], #2} & @@@
     Take[SortBy[data, LeafCount[Last[#]] &], UpTo[12]],
   Dividers -> Center, Alignment -> Left]

makeGrid[formulas]

And here are the formulas that contain at least three different physical fundamental constants:

makeGrid[Select[formulas, Length[Union[Cases[#, siConstantNames, Infinity]]] > 2 &]]

One of the many entity types included in the Wolfram Knowledgebase is fundamental constants. There is some discussion in the literature of what exactly constitutes a constant. Are any dimensional constants really physical constants, or are they just artifacts of our units? Do only dimensionless coupling constants, typically around 26 in the standard model of particle physics, describe the fabric of our universe? We took a liberal approach and also included derived constants in our data, as well as anthropologically relevant values, such as the Sun's mass, that are often standardized by various international bodies. This gives a total of more than 210.

EntityValue["PhysicalConstant", "EntityCount"]

Expressed in SI base units, the constants span about 160 orders of magnitude. Converting all constants to Planck units gives dimensionless values for the constants and allows for a more honest and faithful representation.

toPlanckUnits[u_?NumberQ] := Abs[u]
toPlanckUnits[Quantity[v_, u_]] :=
 Normal[UnitConvert[Abs[v] u /. {"Meters" -> 1/Quantity[1, "PlanckLength"],
       "Seconds" -> 1/Quantity[1, "PlanckTime"],
       "Kilograms" -> 1/Quantity[1, "PlanckMass"],
       "Kelvins" -> 1/Quantity[1, "PlanckTemperature"],
       "Amperes" -> 1/Quantity[1, "PlanckElectricCurrent"]}, "SIBase"] /.
    ("Meters" | "Seconds" | "Kilograms" | "Kelvins" | "Amperes") :> 1]

constantsInPlanckUnits = SortBy[
   Cases[{#1, toPlanckUnits@UnitConvert[#2, "SIBase"]} & @@@
     EntityValue["PhysicalConstant", {"Entity", "Value"}], {_, _?NumberQ}], Last];

ListLogPlot[
 MapIndexed[Callout[{#2[[1]], #1[[2]]}, #1[[1]]] &, constantsInPlanckUnits],
 PlotStyle -> PointSize[0.004], AspectRatio -> 1, GridLines -> {{}, {1}}]

Because the values of the constants span many orders of magnitude, one expects the first digits to obey (approximately) Benford's law. The yellow histogram shows the digit frequencies of the constants, and the blue shows the theoretical predictions of Benford's law.

Show[{Histogram[{First[RealDigits[N@#2]][[1]] & @@@ constantsInPlanckUnits,
     WeightedData[Range[9], Table[Log10[1 + 1/d], {d, 9}]]}, {1}, "PDF",
    Ticks -> {None, Automatic},
    AxesLabel -> {"first digit", "digit frequency"}],
   Graphics[Table[Text[Style[k, Bold], {k, 0.03}], {k, 9}]]}]

Because of the different magnitudes and dimensions, it is not straightforward to visualize all constants. The function FeatureSpacePlot allows us to visualize objects that lie on submanifolds of higher-dimensional spaces. In the following, we take the magnitudes and the unit dimensions of the constants into account. As a result, dimensionally equal or similar constants cluster together.
constants = Select[
   {#1, UnitConvert[#2, "SIBase"]} & @@@
     EntityValue["PhysicalConstant", {"Entity", "Value"}],
   Not[StringMatchQ[#[[1, 2]], (___ ~~ ("Jupiter" | "Sun") ~~ ___)]] &];

siBaseUnits = {"Seconds", "Meters", "Kilograms", "Amperes", "Kelvins", "Moles", "Candelas"};

constantsData = Cases[
   {Log10[Abs[N@QuantityMagnitude[#2]]],
     N@Normalize[Unitize[Exponent[QuantityUnit[#2], siBaseUnits]]], #1} & @@@
    (constants /. {"Steradians" -> 1}), {_?NumberQ, _, _}];

allSIConstants = (Entity["PhysicalConstant", #] & /@
    {"AvogadroConstant", "PlanckConstant", "BoltzmannConstant", "ElementaryCharge",
     "SpeedOfLight", "Cesium133HyperfineSplittingFrequency"});

The poor Avogadro constant is so alone. :-) The reason for this is that not many named fundamental constants contain the mole base unit.

(FeatureSpacePlot[Callout[(1 + #1/100) #2, Style[#3, Gray]] & @@@ constantsData,
     ImageSize -> 1600, Method -> "TSNE", ImageMargins -> 0,
     PlotRangePadding -> Scaled[0.02], AspectRatio -> 2,
     PlotStyle -> PointSize[0.008]] /.
    ((# -> Style[#, Darker[Red]]) & /@ allSIConstants)) // Show[#, ImageSize -> 800] &

They are (not mutually exclusively) organized in the following classes of constants:

EntityClassList["PhysicalConstant"]

As much as possible, each constant has the following set of properties filled out:

EntityValue["PhysicalConstant", "Properties"]

Most properties are self-explanatory; the Lévy-Leblond class might not be. A classic paper from 1977 classified the constants into three types:

  • Type A: physical properties of a particular object
  • Type B: constants characterizing whole classes of physical phenomena
  • Type C: universal constants

Here are examples of constants from these three classes:

typedConstants = With[{d = EntityValue["PhysicalConstant", {"Entity", "LevyLeblondClass"}]},
   Take[DeleteCases[First /@ Cases[d, {_, #}],
      Entity["PhysicalConstant", "EarthMass"]], UpTo[10]] & /@ {"C", "B", "A"}];

TextGrid[Prepend[PadRight[typedConstants, Automatic, ""] // Transpose,
   Style[#, Gray] & /@ {"Type C", "Type B", "Type A"}], Dividers -> Center]

Physicists' most beloved fundamental constant is the fine-structure constant α (or its inverse, with an approximate value of 137). As it is a genuinely dimensionless constant, it is not useful for defining units.

EntityValue[Entity["PhysicalConstant", "InverseFineStructureConstant"],
  {"Value", "StandardUncertainty"}] // InputForm

There are many ways to express the fine-structure constant through other constants. Here are some of them, including the von Klitzing constant RK, the impedance of the vacuum Z0, the electron mass me, the Bohr radius a0 and some others:

(Quantity[None, "FineStructureConstant"] == # /. Quantity[1, s_String] :>
      Quantity[None, s]) & /@
   Entity["PhysicalConstant", "FineStructureConstant"]["EquivalentForms"] //
  Column // TraditionalForm

This number has puzzled and continues to puzzle physicists more than any other. And over the last 100 years, many people have come up with conjectured exact values of the fine-structure constant.
Here we retrieve some of them using the "ConjecturedValues" property and display their values and the relative differences to the measured value:

alphaValues = Entity["PhysicalConstant", "FineStructureConstant"]["ConjecturedValues"];

TextGrid[
  {Row[Riffle[StringSplit[StringReplace[#1, (DigitCharacter ~~ __) :> ""],
       RegularExpression["(?=[$[:upper:]])"]], " "]],
    "Year" /. #2, "Value" /. #2,
    NumberForm[Quantity[100 (N[UnitConvert[("Value" /. #2)/
            Quantity[1, "FineStructureConstant"], "SIBase"]] - 1), "Percent"], 2]} & @@@
   DeleteCases[alphaValues, "Code2011" -> _],
  Dividers -> All, Alignment -> Left]

Something of great importance for the fundamental constants is the uncertainty of their values. With the exception of the fundamental constants that now have defined values, fundamental constants are measured, and every experiment has an inherent uncertainty. In the Wolfram Language, any number can be precision tagged; e.g. here is π to 10 digits:

π10 = 3.1415926535`10

The difference to π is zero within an uncertainty/error of the order 10⁻¹⁰:

Pi - π10

Alternatively, one can use an interval to encode an uncertainty:

π10Int = Interval[{3.141592653, 3.141592654}]

Pi - π10Int

When using precision-tagged, arbitrary-precision numbers as well as intervals in computations, the precision (interval width) is computed, and does represent the precision of the result. In the forthcoming version of the Wolfram Language, there will be a more direct representation of numbers with uncertainty, called Around (see episode 182 of Stephen Wolfram's Live CEOing livestream). For a natural (one could say canonical) use of this function, we select five constants that have exact values in the new SI:

newSIConstants = ToEntity /@ {c, h, e, k, Subscript[N, A]}

These five fundamental constants are (of course) dimensionally independent.

DimensionalCombinations[{},
 IncludeQuantities -> {Quantity[1, "SpeedOfLight"], Quantity[1, "PlanckConstant"],
   Quantity[1, "ElementaryCharge"], Quantity[1, "BoltzmannConstant"],
   Quantity[1, "AvogadroConstant"]}]

If we add the magnetic constant μ0 and the electric constant ε0, then we can form a two-parameter family of dimensionless combinations.

DimensionalCombinations[{},
 IncludeQuantities -> Join[{Quantity[1, "SpeedOfLight"], Quantity[1, "PlanckConstant"],
    Quantity[1, "ElementaryCharge"], Quantity[1, "BoltzmannConstant"],
    Quantity[1, "AvogadroConstant"]},
   {Quantity[1, "MagneticConstant"], Quantity[1, "ElectricConstant"]}]]

Let's take the Planck constant. CODATA is an international organization that every few years takes all measurements of all fundamental constants and calculates the best mutually compatible values and their uncertainties. (Through various physical relations, many fundamental constants are related to each other and are not independent.) For instance, the values from the last 10 years are:

hValues = {#1, {"Value", "StandardUncertainty"} /. #2} & @@@
   Take[Entity["PhysicalConstant", "PlanckConstant"]["Values"], 5]

PS: The strange-looking rational value is just the reduced fraction of the previously stated new exact value for the Planck constant when the value is expressed in units of J s.

662607015/100000000*10^-34

Here are the proposed values for the four constants h, e, k and NA:

{hNew, eNew, kNew, NAnew} =
  ("Value" /. ("CODATA2017RecommendedRevisedSI" /. #["Values"])) & /@ Rest[newSIConstants]

Take, for instance, the last reported CODATA value for the Planck constant. The value and uncertainty are:

hValues[[4, 2]]

We convert this expression to an Around.
toAround[{value : Quantity[v_, unit_], unc : Quantity[u_, unit_]}] :=
  Quantity[Around[v, u], unit]
toAround[HoldPattern[Around[Quantity[v_, unit_], Quantity[u_, unit_]]]] :=
  Quantity[Around[v, u], unit]
toAround[{v_?NumberQ, u_?NumberQ}] := Around[v, u]
toAround[pc : Entity["PhysicalConstant", _]] :=
  toAround[EntityValue[pc, {"Value", "StandardUncertainty"}]]

toAround[hValues[[4, 2]]]

Now we can carry out arithmetic on it; e.g. when taking the square root, the uncertainty will be appropriately propagated.

Sqrt[%]

Now let's look at a more practical example: what will happen with μ0, the permeability of free space, after the redefinition? Right now, it has an exact value.

Entity["PhysicalConstant", "MagneticConstant"]["Value"]

Unfortunately, keeping this value after defining h and e exactly is not a compatible solution. We recognize this from having a look at the equivalent forms of μ0.

 (14)

6. Martian Commutes and Werewolf Teeth: Using Wolfram|Alpha for Writing Research., 14 .[−]

This post was initially published on Tech-Based Teaching, a blog about computational thinking, educational technology and the spaces in between. Rather than prioritizing a single discipline, Tech-Based Teaching aims to show how edtech can cultivate learning for all students. Past posts have explored the value of writing in math class, the whys and hows of distant reading and the role of tech in libraries.



It's November, also known as National Novel Writing Month (NaNoWriMo). This annual celebration of all things writerly is the perfect excuse for would-be authors to sit down and start writing. For educators and librarians, NaNoWriMo is a great time to weave creative writing into curricula, be it through short fiction activities, campus groups or library meet-ups.

During NaNoWriMo, authors are typically categorized into two distinct types: pantsers, who write by the seat of their pants, and plotters, who are meticulous in their planning. While plotters are likely writing from preplanned outlines, pantsers may need some inspiration.

That's where Wolfram|Alpha comes in handy.

Wolfram|Alpha

What's in a Name?

Wolfram|Alpha can help you name your characters. By typing in “name” plus the name itself, you can find out all sorts of info: when the name was most popular, how common it is and more. If you place a comma between two names, you can compare the two.

For example, let's say you're writing a road-trip story featuring two women named “Sarah” and “Sara.” You type in “name sarah, sara” and see the following:

Sarah and Sara

Sara and Sarah ages

Wolfram|Alpha shows that both names were common around the same time, but one is more likely for a woman who's just slightly older. You can make Sara the older of the two by a hair, and her age can be a point of characterization. The extra year makes her extra wise, or extra bossy.

What if you want to write about a male character? Let's explore two possibilities, Kevin and Alan.

Kevin vs. Alan

By viewing the charts in Wolfram|Alpha, we can see that one name is much more common, but both skew older. What if you try searching for another name, like Dominic?

Dominic name
Dominic info

Additionally, we can see that Dominic is a name with a history, with Wolfram|Alpha showing tidbits such as the fact that it was often used for boys born on Sundays. If you're a pantser, this information is something to file away for later.

Of course, you can always look for popular names if you're setting your work in the modern day:

Popular girl names

Currency Conversions, Travel Plans and Blood Alcohol Levels

So, Sarah and Sara are on their trip. Let's say that they're small-town southern girls who happened to meet because of their shared first name, but you're not sure what town fits the bill. You can look for cities in North Carolina with a population of under 2,000 people:

Cities in NC

From there, you can calculate the price of gas and other costs of living. The small details you uncover can help with world-building, particularly if the story is set slightly in the past. You can also compare facts about different cities:

Gas prices

If spontaneous Sarah didn't plan for her trip as well as staid Sara, then you can calculate just how off the mark she was, particularly with an international journey.

Wolfram|Alpha provides currency conversions, so if the ladies' trip somehow takes them to the UK, then you can determine just how much their trip savings are currently worth:

Dollars to pounds conversion

Even beyond finances or travel planning, Wolfram|Alpha can help ground a plot in reality. Let's say Sarah and Sara end up at a pub. How many bottles of hard cider can Sarah enjoy before things go pear-shaped?

How many drinks
BAC chart

The process of figuring out the physical details of your characters can help you visualize them better too!

Let's Get Metaphysical

Beyond providing real-life calculations that are useful in everyday situations, Wolfram|Alpha can help to add a touch of realism to genre fiction. For example, going back to our friend Dominic: well, he's a vampire. He was born in 1703, on a Sunday to tie in with his name. But on what date, exactly? We can view our 1703 calendar with a query of “January 1703”:

January 1703

From this screen, we can also see his age relative to today, putting him at well over three centuries old. We can also see that there was a full Moon on January 3. Could you use this as a plot point? Perhaps he's stronger against sunlight than the average vampire due to the full Moon reflecting more of the Sun's rays.

If you're a pantser, these sorts of searches can be extra helpful for inspiring new plot or character developments. While you may not have initially set out to create a full Moon-enhanced vampire, name searches and looking up past events lit that spark of inspiration.

Real Science, Real Fiction

Realistic physical properties can be especially helpful for sci-fi writers, particularly those writing hard sci-fi. While there are some example Wolfram|Alpha searches for sci-fi “entertainment” on this page, many of which relate to preexisting genre media, you can also use astronomy searches to enhance your sci-fi setting.

In a previous search, “Emma” came up as a popular name. Maybe it's still popular when, decades in the future, we've colonized Mars.

In this sci-fi future, we've normalized lightspeed travel. To figure out Emma's commute, you can use formulas to measure the amount of time it would take to travel from place to place. If Emma works at a Martian university, then you can see how long it would take for a lightspeed bus to shuttle her to the office:

Speed of light travel

She would hardly have time to read through her newsfeed on her holo-headset before the bus dropped her off at work!
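If you want to sanity-check such numbers yourself, the same arithmetic is a one-liner in the Wolfram Language (the 20-kilometer commute distance here is just an invented example):

(* time for a light-speed bus to cover a hypothetical 20 km Martian commute *)
UnitConvert[Quantity[20, "Kilometers"]/Quantity[1, "SpeedOfLight"], "Microseconds"]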

For science fiction plots set in a time period closer to today's tech, you can calculate totals using Wolfram|Alpha's many included formulas. For example, you can figure out volts and amps for a maker using Ohm's law, or even run through a linear regression or two for a fictional AI assistant.
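Here is a hedged sketch of what those last two might look like as Wolfram Language input, with made-up numbers for the maker's circuit and for the AI's training data:

(* Ohm's law, V = I R, with invented example values *)
UnitConvert[Quantity[0.5, "Amperes"] Quantity[220, "Ohms"], "Volts"]

(* a tiny linear regression for the fictional AI assistant *)
LinearModelFit[{{1, 2.1}, {2, 3.9}, {3, 6.2}, {4, 8.1}}, x, x]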

Brainstorming the Uncanny

Because Wolfram|Alpha is a “computation engine,” it also provides general facts that can help you come up with ideas for characters (and monsters).

For horror writers, the bare facts can provide a perfect starting point for tweaking reality ever so slightly into the uncanny valley.

For example, let's say you have werewolves in your story. These aren't friendly werewolves, though: they're the eldritch kind that give passersby the heebie-jeebies. Going by the one small tweak rule, you can compare the number of teeth in a dog's mouth to the number in a human's mouth:

Werewolf teeth

What if your werewolves have too-toothy smiles because they have a few too many incisors, matching up with the number found in a wolf's mouth? Are dentists hunted down if they discover the truth?

Murder, She Searched

Mystery writers can also discover interesting things on Wolfram|Alpha, from chemical compositions to ciphers. With the latter, there are several word-puzzle tools you can use to create clues for a crime scene. For example, by using underscores in your searches, you can build Hangman-like messages from blanks and letters:

Blanks

Wolfram|Alpha also has a text-to-Morse converter, allowing you to convert normal text to dots and dashes. Perhaps a sidekick is attempting to get in touch with a wily detective without kidnappers noticing what's going on:

Door unlocked

For a mystery set in the past, you can use a date search to determine the sunrise, sunset and weather patterns of any given day. While this data is invaluable for historical writers (the books they write are all about historical accuracy, after all), it can also help you determine how an old-timey crime might have gone down. For example, the witness couldn't have seen the Sun peeking through the blinds at 5:35am because sunrise hadn't happened yet:

Seattle sunrise
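The same check can also be scripted with the Wolfram Language's Sunrise function if you would rather keep it in a notebook (the city and date below are just an invented example):

(* sunrise in Seattle on an invented crime date *)
Sunrise[Entity["City", {"Seattle", "Washington", "UnitedStates"}],
 DateObject[{1923, 11, 5}]]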

If you're trying to come up with ideas on the fly, having an all-in-one spot to search for facts and figures can be invaluable. For more topic suggestions, check out this page to see other example search ideas separated into categories.

Hopefully these ideas have sparked your interest, whether for your own personal NaNo journey or for a library- or classroom-based NaNoWriMo project. Feel free to share this post with other writers or educators if you've found it to be useful. And even after November draws to a close, continue mining Wolfram|Alpha for story ideas. Write on!

 (0)

7. Wolfram U Presents: Wolfram Technology in Action., 09 .[−]

Wolfram Technology in Action

Join Wolfram U for Wolfram Technology in Action: Applications & New Developments, a three-part web series showcasing innovative applications in the Wolfram Language.

Newcomers to Wolfram technology are welcome, as are longtime users wanting to see the latest functionality in the language.

Web Series Overview

The series is modeled after the three different tracks offered at our recent Wolfram Technology Conference, covering data science and AI (November 14), engineering and modeling (November 28) and mathematics and science (December 12). Each webinar will feature presentations shared at the Wolfram Technology Conference, so if you weren’t able to attend this year, you can still take part in some of the highlights.

Additional presentations will be given live during each webinar by Wolfram staff scientists, application developers, software engineers and Wolfram Language users who apply the technology every day to their business operations and research.

Webinar 1: Data Science and AI

At the Data Science and AI webinar on November 14, learn how to build applications using models from the Wolfram Neural Net Repository, including an overview of some of the newest models available for classification, feature extraction, image processing, speech, audio and more. We will also show some applications built by students from the Wolfram Summer Programs, and we’ll perform real-time examples of model training with data.

The Data Science and AI webinar will conclude with a real-world example applying computer vision tasks to digital pathology for the purposes of cancer diagnosis. Get a preview of the webinar content and learn more about Summer School projects by visiting the Wolfram Community posts on Rooftop Recognition for Solar Energy Potential and Using Machine Learning to Diagnose Pneumonia from Chest X-Rays.

Register Now

You can join any or all of the webinars to benefit from the series. You only need to sign up once to save your seat for this webinar and the sessions that follow. When you sign up, you’ll receive an email confirming your registration, as well as reminders for upcoming sessions.

Don't miss this opportunity to engage with other users and experts of the Wolfram Language!


Wolfram U is a free and open learning hub for students, professionals and learners of all stripes. Explore interactive courses on a variety of topics, get up to speed with the Wolfram Language and take advantage of scheduled, free webinars led by Wolfram experts.

 (0)

8. The Wolfram Technology Conference 2018 Livecoding Championship: A Recap., 01 .[−]

For the third year in a row, the annual Wolfram Technology Conference played host to a new kind of esport: the Livecoding Championship. Expert programmers competed to solve challenges with the Wolfram Language, with the goal of winning the championship tournament belt and exclusive bragging rights.

Wolfie with tournament belt

This year I had the honor of composing the competition questions, in addition to serving as live commentator alongside trusty co-commentator (and Wolfram’s lead communications strategist) Swede White. You can view the entire recorded livestream of the event here (popcorn not included).

Commentators

Right: Swede White and the author commentating. Left: Stephen Wolfram and the author.

This year’s competition started with a brief introduction by competition room emcee (and Wolfram’s director of outreach and communications) Danielle Rommel and Wolfram Research founder and CEO Stephen Wolfram. Stephen discussed his own history with (non-competitive) livecoding and opined on the Wolfram Language’s unique advantages for livecoding. He concluded with some advice for the contestants: “Read the question!”

Question 1: A New Kind of Science Bibliography Titles


Use the Wolfram Data Repository to obtain the titles of the books in Stephen Wolfram’s library that were used during the creation of A New Kind of Science. Return the longest title from this list as a string.

After a short delay, the contestants started work on the first question, which happened to relate to Stephen Wolfram’s 2002 book A New Kind of Science. Stephen dropped by the commentators’ table to offer his input on the question and its interesting answer, an obscure tome from 1692 with a 334-character title (Miscellaneous Discourses Concerning the Dissolution and Changes of the World Wherein the Primitive Chaos and Creation, the General Deluge, Fountains, Formed Stones, Sea-Shells found in the Earth, Subterraneous Trees, Mountains, Earthquakes, Vulcanoes, the Universal Conflagration and Future State, are Largely Discussed and Examined):

Miscellaneous Discourses

Question 2: Countries Closest to a Disk


Find the country whose Polygon is closest to a disk. More specifically, find the country that minimizes . Work in the equirectangular projection. Return a Country entity. Do not use GeoVariant.

The second question turned out to be quite challenging for our contestants: knowledge of a certain function, DiscretizeGraphics, was essential to solving it, and many contestants had to spend precious time tracking down this function in the Wolfram Language documentation.

Countries

Question 3: Astronaut Timelines


Plot the dates of birth of the six astronauts/cosmonauts who were crewmembers on the most manned space missions. Return a TimelinePlot (a Graphics object) called with an association of Person entity -> DateObject rules, with all options at their defaults.

The contestants stumbled a bit in interpreting the third question, but they figured it out relatively quickly. However, some technical issues led to some exciting drama as the judges deliberated on who to hand the third-place point to. Stephen made a surprise return to the commentators’ table to talk about astronaut Michael Foale’s unique connection to Wolfram Research and Mathematica. I highly recommend reading Michael’s fascinating keynote address, Navigating on Mir, given at the 10th-anniversary Mathematica Conference.

Astronauts/cosmonauts

Question 4: Centroid in Capital


Only one US state (excluding the District of Columbia) has a geographic centroid located within the polygon of its capital city. Find that state and return it as an AdministrativeDivision entity.

Wolfram Algorithms R&D department researcher José Martín-García joined the commentators’ table for the fourth question. José worked on the geographic computation functionality in the Wolfram Language, and helped explain to our audience some of the technical aspects of this question, such as the mathematical concept of a geometric centroid. Solving this question involved the same DiscretizeGraphics function that tripped up contestants on question 2, but it seems that this time they were prepared, and produced their solutions much more quickly.

Geographic centroid

Question 5: Periodic Pie


Retrieve the material color of each Element in the periodic table, as given by the Color interpreter. Discard any Missing or Colorless values, and return a PieChart Graphics object with a sector for each color, where each sector’s width is proportional to the number of elements with that color, and the sector is styled with the color. Sort the colors into canonical order by count before generating the pie chart, so the sector sizes are in order.

The fifth question was, lengthwise, the most verbose in this year’s competition. For every question, the goal is to provide as much clarity as possible regarding the expected format of the answer (as well as its content), which this question demonstrates well. The last sentence is particularly important, as it specifies that the pie “slices” are expected to be in ascending order by size, which ensures that the pie chart looks the same as the expected result. This aspect took our contestants a few tries to pin down, but they eventually got it.

PieChart

Question 6: A.I.-braham Lincoln


The neural net model NetModel["Wolfram English Character-Level Language Model V1"] predicts the next character in a string of English text. Nest this model 30 times (generating 30 new characters) on the first sentence in the Gettysburg Address, as given by ExampleData and TextSentences. Return a string.

The sixth question makes use of not only the Wolfram Language’s unique symbolic support for neural networks, but also the recently launched Wolfram Neural Net Repository. You can read more about the repository in its introductory blog post.

This particular neural network, the Wolfram English Character-Level Language Model V1, is trained to generate English text by predicting the most probable next character in a string. The results here might be improbable to hear from President Lincoln’s mouth, but they do reflect the fact that part of this model’s training data consists of old news articles!
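One compact way to approach the question looks roughly like this (a sketch, not necessarily any contestant's solution; it assumes that applying the model directly to a string returns its single most likely next character):

(* sketch of Question 6: nest the character-level model 30 times on the
   first sentence of the Gettysburg Address *)
lm = NetModel["Wolfram English Character-Level Language Model V1"];
s0 = First[TextSentences[ExampleData[{"Text", "GettysburgAddress"}]]];
Nest[StringJoin[#, lm[#]] &, s0, 30]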

Lincoln output

Question 7 (Actually, Question 10): Eclipses per Year


Of the many total solar eclipses to occur between now and December 31st, 2100, two will happen during the same year. Find that year and return it as an integer.

For the seventh and last question of the night, our judges decided to skip ahead to the tenth question on their list! We hadn’t expected to get to this question in the competition and so hadn’t lined up an expert commentator. But as it turns out, José Martín-García knows a fair bit about eclipses, and he kindly joined the commentators’ table on short notice to briefly explain eclipse cycles and the difference between partial and total solar eclipses. Check out Stephen’s blog post about the August 2017 solar eclipse for an in-depth explanation with excellent visualizations.

Eclipse visualization

(The highlighted regions here show the “partial phase” of each eclipse, which is the region in which the eclipse is visible as a partial eclipse. The Wolfram Language does not yet have information on the total phases of these eclipses.)
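For the curious, here is a sketch of one way the question could be attacked (assuming SolarEclipse accepts a date range ending with All together with the EclipseType option; this is not necessarily what any contestant wrote):

(* sketch of Question 10: find the year with two total solar eclipses before 2101 *)
dates = SolarEclipse[{Today, DateObject[{2100, 12, 31}], All}, EclipseType -> "Total"];
years = DateValue[#, "Year"] & /@ dates;
First[Keys[Select[Counts[years], # == 2 &]]]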

The Results

At the end of the competition, the third-place contestant, going under the alias “AieAieAie,” was unmasked as Etienne Bernard, lead architect in Wolfram’s Advanced Research Group (which is the group responsible for the machine learning functionality of the Wolfram Language, among other wonderful features).

Etienne Bernard and Carlo Barbieri
Etienne Bernard and Carlo Barbieri

The contestant going by the name “Total Annihilation” (Carlo Barbieri, consultant in the Advanced Research Group) and the 2015 Wolfram Innovator Award recipient Philip Maymin tied for second place, and both won limited-edition Tech CEO mini-figures!

Left: Philip Maymin. Right: Tech CEO mini-figure.
Left: Philip Maymin. Right: Tech CEO mini-figure.

The first-place title of champion and the championship belt (as well as a Tech CEO mini-figure) went to the contestant going as “RiemannXi,” Chip Hurst!

Chip Hurst

An Astronomical Inconsistency

I wanted to specifically address potential confusion regarding question 3, Astronaut Timelines. This is the text of the question:


Plot the dates of birth of the six astronauts/cosmonauts who were crewmembers on the most manned space missions. Return a TimelinePlot (a Graphics object) called with an association of Person entity -> DateObject rules, with all options at their defaults.

Highly skilled programmer Philip Maymin was one of our contestants this year, and he was dissatisfied with the outcome of this round. Here’s a solution to the question that produces the expected “correct” result:

counts = Counts@Flatten[EntityValue["MannedSpaceMission", "Crew"]];
TimelinePlot[
 EntityValue[Keys@TakeLargest[counts, 6],
  EntityProperty["Person", "BirthDate"], "EntityAssociation"]]

And here’s Philip’s solution:

TimelinePlot@
 EntityValue[
  Keys[Reverse[
     SortBy[EntityValue[
       Flatten@Keys@
         Flatten[EntityClass["MannedSpaceMission",
            "MannedSpaceMission"][
           EntityProperty["MannedSpaceMission", "PrimaryCrew"]]],
       "MannedSpaceMissions", "EntityAssociation"], Length]][[;; 6]]],
   "BirthDate", "EntityAssociation"]

Note the slightly different approaches: the first solution gets the "Crew" property (a list) of every "MannedSpaceMission" entity, flattens the resulting list and counts the occurrences of each "Person" entity within that, while Philip’s solution takes the aforementioned list and checks the length of the "MannedSpaceMission" property for each "Person" entity in it. These are both perfectly valid techniques (although Philip’s didn’t even occur to me as a possibility when I wrote this question), and in theory should both produce the exact same result, as they’re both accessing the same conceptual information, just through slightly different means. But they don’t, and it turns out Philip’s result is actually the correct one! Why is this?

The primary reason for this discrepancy boils down to a bug in the Wolfram Knowledgebase representation of the STS-27 mission of Space Shuttle Atlantis. Let’s look at the "Crew" property for STS-27:

Entity["MannedSpaceMission", "STS27"]["Crew"]

Well, that’s clearly wrong! There’s a "Person" entity for Jerry Lynn Ross in there, but it doesn’t match the entity that canonically represents him within the Knowledgebase. I’ve reported this inaccuracy, along with a few other issues, to our data curation team, and I expect it will be addressed soon. Thanks to Philip for bringing this to our attention!

Conclusion

The inaugural Wolfram Language Livecoding Competition took place at the 2016 Wolfram Technology Conference, and the following year’s competition in 2017 was the first to be livestreamed. We held something of a test-run for this year’s competition at the Wolfram Summer School in June, for which I also composed questions and piloted an experimental second, concurrent livestream for behind-the-scenes commentary. At this year’s Technology Conference we merged these two streams into one, physically separating the contestants from the commentators to avoid “contaminating” the contestant pool with our commentary. We also debuted a new semiautomated grading system, which eased the judges’ workload considerably. Each time we’ve made some mistakes, but we’re slowly improving, and I think we’ve finally hit upon a format that’s both technically feasible and entertaining for a live audience. We’re all looking forward to the next competition!

 (2)

9. The Winners of the 2018 One-Liner Competition., 25 .[−]

Images and machine learning were the dominant themes of submissions to the One-Liner Competition held at this year's Wolfram Technology Conference. The competition challenges attendees to show us the most astounding things they can accomplish with 128 or fewer characters (less than one tweet) of Wolfram Language code. And astound us they did. Read on to see how.

Honorable Mention
David DeBrota: The Eyes Have It (127 characters)

David's submission takes first place in the category of creepiness, and it was timely, given the upcoming Halloween holiday. The judges were impressed by its visual impact:

c=Flatten@DeleteCases[WebImageSearch["eye iris","Images",MaxItems->120],$Failed];ImageCollage[ConformImages[c[[1;;Length[c]]]]]

David had a character to spare with this submission, so he had no reason to shorten it. But he could have saved 20 characters by eliminating code that was left over from his exploration process. I'll leave it as an exercise for the interested reader to figure out which characters those are.

Honorable Mention
Abby Brown: Flag Mosaic (128 characters)

Abby's submission recreates an image by assembling low-resolution flag images. In order to squeak in at the 128-character limit, she cleverly uses UN flags. Over half of the code is grabbing the flag and dress images; the heart of the rendering work is a compact 60-character application of ImagePartition, Nearest and ImageAssemble:

f=ImageResize[#,{4,4}]&/@CountryData["UN","Flag"];{i=Entity["Word", "dress"]["Image"],ImageAssemble@Map[Nearest[f,#][[1]]&,ImagePartition[i,4],{2}]}

This One-Liner derives from an activity in Abby’s computational thinking group at Torrey Pines High School. You can download a notebook that describes the activity by clicking the Flag Macaw link on this page.

Dishonorable Honorable Mention
Pedro Fonseca: Average Precision of the ResNet-101 Trained on YFCC100m Geotagged Data (127 characters)

Take a second to consider what this One-Liner does: gets the list of 164,599 city entities in the Wolfram Language, searches the web for an image of each one, applies the ResNet neural network to each image to guess where it was taken and compares that location with the geotagging information in the image to see how precise the neural network's prediction is. This may well be an honorable mention… but we'd have to wait 14 hours for the code to evaluate in order to find out:

Mean[GeoDistance[NetModel["ResNet-101 Trained on YFCC100m Geotagged Data"]@WebImageSearch[#[[1]]][1,1],#]&/@EntityList["City"]]

Dishonorable Mention
David DeBrota: Find the Black Disk (128 characters)

I suspect David was fishing for a dishonorable mention with this submission that creates what one judge called the cruelest game of Where’s Waldo ever invented. Your task is to find the black disk among the randomly colored random polygons:

Graphics[{Table[{RandomColor[],Translate[RandomPolygon["Convex"],{i,j}+RandomReal[{-E,E},2]]},{i,99},{j,99}],Disk[{9E,9E},1/E]}]

Graphics output

What? You can't find the disk?? Here's the output again with the disk enlarged:

Graphics[{Table[{RandomColor[],Translate[RandomPolygon["Convex"],{i,j}+RandomReal[{-E,E},2]]},{i,99},{j,99}],Disk[{9E,9E},5/E]}]

Graphics (disk enlarged)

Note David's extensive use of the one-letter E to save characters in numeric quantities.

Third Place
Abby Brown: Alphabet of Words (128 characters)

The uniqueness and creativity of this idea moved the judges to award third place to this One-Liner that makes a table of words that are pronounced like letters. It's fun, and it opens the door to further explorations, such as finding words (like season) whose pronunciations begin with a letter name:

w = # -> WordData[#, "PhoneticForm"] &; a = w /@ Alphabet[]; p =
 w /@ WordList[]; Grid@
 Table[{a[[n]], If[a[[n, 2]] === #[[2]], #, Nothing] & /@ p}, {n, 26}]
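As a follow-up to that suggestion, here is a rough (and definitely not one-liner-sized) sketch of such an exploration; it assumes WordData has a phonetic form for the single-letter word "c":

(* words whose pronunciation begins with the pronunciation of the letter "c" (like "season") *)
cSound = WordData["c", "PhoneticForm"];
If[StringQ[cSound],
 Select[WordList[],
   StringQ[WordData[#, "PhoneticForm"]] &&
     StringStartsQ[WordData[#, "PhoneticForm"], cSound] &] // Short]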

Like Abby's Flag Mosaic submission, this One-Liner also derives from an activity in Abby’s computational thinking group at Torrey Pines High School. You can download a notebook that describes the activity by clicking the Alpha Words link on this page.

Second Place
Isaac Gelman: Computational Thinking: The Age Distribution 2018 Wolfram Technology Conference Dinner (69 characters)

This was one of the timeliest and shortest One-Liners we've yet seen. It answers a question that arose just hours before the end of the competition.

Every Wolfram Technology Conference includes a conference dinner at which Stephen Wolfram hosts an “ask me anything” session. One of the questions at this year's dinner was: What is the age and gender distribution of conference attendees?

To answer the age part of that question, Isaac took photos of all of the tables at the dinner, used FacialFeatures to estimate the ages of the people in the photos and made a histogram of the result. We can't vouch for the accuracy of the result, but it seems plausible:

FacialFeatures["Age"]/@Values[Databin@"ytgvoXyH"]//Flatten//Histogram

Here are the first three photos in the Databin:

Take[Values[Databin@"ytgvoXyH"],3]

Congratulations, Isaac, on a brilliant demonstration of computational thinking with Wolfram technology.

First Place
Philip Maymin: Eliza in a Tweet (127 characters)

Our first-place winner encapsulated an homage to Joseph Weizenbaum's natural language conversation program, ELIZA, in a single tweet. Philip's Eliza often responds with off-the-wall phrases that make it seem either a few cards short of a full deck or deeply profound. But it was the judges' first conversation with Eliza, which eerily references current world events, that clinched first place:

While[StringQ[x=InputString@HELP],Echo@NestWhile[#<>y&,x<>"
",StringFreeQ[",.\"
",y=(e=NetModel)[e[][[-7]]][#,"RandomSample"]]&]]

Eliza

Weizenbaum was aghast that people suggested that ELIZA could substitute for human psychotherapists. The program could not and was never intended to heal patients with psychological illnesses. Philip's Eliza, however, could well drive you crazy.


There were 14 submissions to this year's competition, all of which you can see in this notebook. Thank you, participants, for showing us once again the power and economy of the Wolfram Language.

 (3)

10. Highlights from the 2018 Wolfram Technology Conference., 23 .[−]

Stephen Wolfram speaking

Last week, Wolfram hosted individuals from across the globe at our annual Wolfram Technology Conference. This year we had a packed program of talks, training, and networking and dining events, while attendees got to see firsthand what’s new and what’s coming in the Wolfram tech stack from developers, our power users and Stephen Wolfram himself.

Networking dinner

The conference kicked off with Stephen’s keynote speech, which rang in at three and a half hours of live demonstrations of upcoming functions and features in Version 12 of the Wolfram Language. Before getting started, Stephen fired up Version 1 of Mathematica on a Macintosh SE/30; it’s remarkable that code written in Version 1 still runs in the newest versions of the Wolfram Language. Stephen also shared with us the latest developments in Wolfram|Alpha Enterprise, introduced Wolfram|Alpha Notebooks, new cloud functionalities, the Wolfram Notebook Archive and the local Wolfram Engine, a way for developers to easily access the Wolfram Language without barriers of entry.

Version 12 Is Coming

Stephen Wolfram's keynote speech

Most exciting during Stephen’s keynote was the litany of new features coming in Version 12 of the Wolfram Language. Stephen ticked through them alphabetically: a is for anatomy, b is for blockchain, c is for compiler and so forth. A few of the many highlights included:

  • Audio: speech synthesis, speech recognition and audio recognition with built-in superfunctions built on top of pre-trained neural networks, a way to rapidly prototype and speed up development
  • Axiom systems: the Wolfram Language has always been the best at highly advanced mathematical functionality, and Stephen introduced a new way to computationally examine and analyze logic axiom systems (including his own discovery of the simplest axiom system), along with automatic theorem proving
  • Blockchain: Stephen introduced more functionality for building and executing computational contracts, and shared some recent developments in Wolfram Blockchain Labs; he also talked about how companies using Wolfram|Alpha have the best oracle to verify conditions under which smart contracts are executed (you can watch Stephen explain the future of smart contracts here)
  • Compiler: perhaps one of the most anticipated developments in the Wolfram Language, the compiler increases speed and efficiency by orders of magnitude, and will undoubtedly change the way individuals and organizations develop applications with the Wolfram tech stack
  • Databases: another development Stephen introduced was increased connectivity to external databases using the Wolfram Language entity framework; Postgres, Oracle, SPARQL, S3, IPFS, a slew of SQL databases and more will now work seamlessly with built-in functions that improve data handling in the Wolfram tech stack
  • Externals: we also saw new connectivity with ZeroMQ, connectivity to Jupyter notebooks (for those who choose to use a less sophisticated notebook interface for exploratory work) and new functions for directly executing commands on the web
  • Facial features: another set of superfunctions with artificial intelligence built directly into them; age, gender and emotion of facial images can now be determined with the same ease of use as ImageIdentify
  • Geocomputation: new functions for geopositions and sophisticated computations with vector operations that build on an already impressive set of functions for geographic visualizations and applications
  • Geometry: after 2,000 years, Platonic solids are finally computable in the Wolfram Language, along with new ways to compute with Euclidean geometry, polyhedra and synthetic geometry
  • Neural networks: with the launch of our Neural Net Repository, new functionality keeps coming out of R&D and experimentation, such as a more efficient framework, support for multiple GPUs, support for Tensor Core and more

The upcoming release of Version 12 will bring with it not only new functions, but also improvements to interfaces, interoperability with other programming languages and core language efficiencies. If you’re interested in seeing Version 12 being designed and built firsthand, be sure to watch Stephen’s “Live CEOing” series of livestreams.

A New Kind of eSport with Livecoding

Livecoding championship

For the second year, Wolfram hosted and livestreamed a livecoding championship where our internal experts and conference guests competed to see who had the best Wolfram Language chops. Hosted annually, the competition is a fun way to unwind after a day full of talks and an evening of networking. Each contestant is given a coding challenge, and the first to accurately solve the problem is awarded points. Challenges utilize the full range of capabilities in the Wolfram Language, including built-in data, geometric computations and even data science. It was impressive to see how quickly a complicated problem could be solved.

Livecoding championship winner

This year, our winner was Chip Hurst, a Wolfram expert who is currently involved in cutting-edge developments in 3D printing in biotech applications. Congratulations, Chip!

Wolfram Innovator Award Winners

Wolfram Innovator Award winners

Each year at the Technology Conference, Wolfram recognizes outstanding individuals whose work exemplifies excellence in their fields. Stephen recognized eight individuals this year, from educators to engineers to computational mathematicians. This year, Wolfram honored:

  • Abby Brown, a professor of mathematics at Torrey Pines High School, where she develops innovative ways to get her students interested in STEM through 3D printing, mathematics clubs and lesson plans that incorporate artificial intelligence with Wolfram technologies
  • Bruce Colletti, a retired United States Air Force major and defense contractor whose work uses Wolfram technology for high-level commercial, academic and government projects focused on operations, logistics, program evaluation and homeland security
  • David Creech, the principal engineer at McDermott, who leads the development of new systems and processes culminating in hundreds of thousands of lines of Wolfram Language code and thousands of pages of documentation
  • Nicholas Mecholsky, a research scientist at the Vitreous State Laboratory and adjunct assistant professor at the Catholic University of America whose work uses Mathematica to model large-scale chemical processes that increase the safety of nuclear waste storage
  • Jorge Ramirez, an applied mathematician at Universidad Nacional de Colombia Sede Medellín whose work spans natural and biological sciences and includes innovations in education delivery using Mathematica
  • Aaron Santos, a data science supervisor at EMC Insurance whose work uses Wolfram technology for rapid prototyping, innovative Internet of Things measurements and multiparadigm data science to develop innovative solutions to complex problems
  • Neil Singer, the president of AC Kinetics, Inc., where he uses Wolfram SystemModeler for advanced simulation of digital motor controllers
  • Nassim Nicholas Taleb, a distinguished professor of risk engineering at the New York University Tandon School of Engineering and author of the multivolume essay Incerto whose work examines risk, probability and computational preasymptotics (a field that is often ignored)

That’s a Wrap!

We’ll be back with a post about the winners of our annual one-liner competition. This year’s conference was another success for the books, and we look forward to seeing everyone back next year!

 (0)


 