Happy Hour of Code! There’s no better reason to start learning or continue honing your programming skills than the Hour of Code, an annual celebration of computer science during Computer Science Education Week. While we like to think that every hour is a great hour to code, we look forward to the Hour of Code event as an opportunity to come together and share some of our best Wolfram Language resources for students. Since its 2013 launch, the Hour of Code has been an immense success, introducing valuable programming skills to millions of students. So with this year’s Hour already underway, let’s take a look at the ways you can get started!
This year we introduced Wolfram|Alpha Notebook Edition, created with students in mind. It combines the power of Mathematica and the Wolfram Language with the ease of Wolfram|Alpha. Wolfram|Alpha Notebook Edition encourages students to learn as they dynamically work through computations and natural language queries directly within a notebook environment:
Many students are familiar with the step-by-step solutions Wolfram|Alpha offers. Wolfram|Alpha Notebook Edition goes even further by helping you learn the code behind the natural language computations:
For those who want to start programming with the Wolfram Language, Wolfram Programming Lab is a great tool for beginners. You can get started with our open collection of sample Explorations, which range in skill level from starter and basic to advanced.
To keep exploring with your own Wolfram Language code, head over to the Wolfram Cloud for its full-scale notebook environment—free with a Cloud Basic account. Test your skills with Wolfram Challenges, which you can work through and submit in the cloud. Try to get your program on the leaderboard!
To keep coding well past the Hour, join us at the Wolfram High School Summer Camp! A project-oriented camp specifically designed for high-school students, the Summer Camp is a unique opportunity to learn cutting-edge programming, computational thinking and innovative technology. For continuing education and live training, you can also find our upcoming events on the Wolfram U calendar—with a variety of online courses for students, researchers and educators alike. We also have some exciting updates coming to our Computational Thinking Initiatives in early 2020 that we cannot wait to share with you. We appreciate your participation in the Hour of Code, and look forward to seeing what the next generation of coders creates!
Much effort and money are spent trying to analyze whether political messages resonate with the electorate. With the UK in its final days before a general election, I thought I would see if I could gain such insight with minimal effort.
My approach is simple: track the sentiment of tweets that mention each party. Since the Wolfram Language has a built-in sentiment classifier and connections to external services, we can analyze these messages with only a few lines of code.
Setting Up the Data
First I need to set up a database to store the accumulated data in the cloud. Our Wolfram Data Drop service takes care of this automatically. All I have to do is execute the following code:
CreateDatabin[Permissions -> "Private"]
Now that I have somewhere to store the data, I want to set up an hourly service in the Wolfram Cloud that will log the sentiment at that moment. The process is to connect to Twitter and fetch up to five hundred tweets about each party. We then clean the tweets up to remove non-words and Twitter syntax, and score them using the off-the-shelf sentiment analyser. Finally, we write the numbers into the database:
With[{
  databin = Databin["IDVxLzdw"],
  tweetClean = Function[{tweet},
    ToLowerCase[ImportString[StringReplace[tweet, {
        StartOfString ~~ "RT " -> "",(*remove the retweet label*)
        "@" ~~ Shortest[__] ~~ WhitespaceCharacter | EndOfString -> " ",(*remove user names*)
        "http:" ~~ Shortest[__] ~~ WhitespaceCharacter | EndOfString -> " ",(*delete hyperlinks*)
        "https:" ~~ Shortest[__] ~~ WhitespaceCharacter | EndOfString -> " ",
        "#" -> "",(*turn hashtags into words*)
        "\n" -> " "(*remove line breaks*)
        }, IgnoreCase -> True], "HTML"]]]},
 With[{meanSentiment = Function[{tweets},
     Mean[ReplaceAll[
       Classify["Sentiment",
        Select[Normal[tweets[DeleteDuplicates, tweetClean[#Text] &]], StringQ]],
       {"Neutral" | Indeterminate -> 0., "Positive" -> 1., "Negative" -> -1.}]]]},
  CloudDeploy[
   ScheduledTask[
    Block[{twitter, conservativeTweets, labourTweets, conservativeSentiment, labourSentiment},
     twitter = ServiceConnect["Twitter"];
     conservativeTweets = twitter["TweetSearch", "Query" -> "conservative", MaxItems -> 500];
     labourTweets = twitter["TweetSearch", "Query" -> "labour", MaxItems -> 500];
     ServiceDisconnect[twitter];
     conservativeSentiment = meanSentiment[conservativeTweets];
     labourSentiment = meanSentiment[labourTweets];
     If[NumberQ[conservativeSentiment] && NumberQ[labourSentiment],
      DatabinAdd[databin,
       Association["Conservative" -> conservativeSentiment,
        "Labour" -> labourSentiment]]]],
    "Hourly"]]]]
Having executed that code once, I now wait while the data accumulates. Fortunately, UK elections are brief and fairly intense affairs that are over in a month.
A Brief UK Election Primer
For the non-British, here is a quick primer on UK politics. There are two main parties: the Conservatives (who are more right wing, with a preference for free markets, low taxes and smaller government) and Labour (who are more socialist, with a preference for more government spending and more state control). Americans should probably think of Conservatives as equivalent to Republicans and Labour as Democrats; however, UK politics are generally more socialist than the US, but more right wing than much of Europe. While a third party, the Liberal Democrats, receives significant votes, due to the nature of our voting system it achieves few members of Parliament. There are a number of other parties mostly focused on regional issues that take a handful of seats in Parliament. To make my task more complicated, the Conservative Party is also known as the Tory Party (more often by their opponents who use the term pejoratively). I have ignored all these complexities by just looking for tweets with the two main official party names.
Sentiment Analysis
By now, my code has been in the cloud for a few weeks—let’s see what we collected:
Dataset[Databin["IDVxLzdw"]]
The results are easier to work with if we fetch the data as a set of TimeSeries:
data = TimeSeries[Databin["IDVxLzdw"]]
The first thing we can see is that most of the time, the average sentiment for both parties is negative. I don’t know if that is telling us something about the nature of discourse on Twitter or about the public’s opinion of its politicians:
DateListPlot[data, PlotStyle -> {Red, Blue}]
The data is pretty noisy. If I were actually running a party’s campaign, I would want really up-to-date metrics. But for a high-level overview, I am going to smooth that with a moving average of 12 hours:
DateListPlot[MovingMap[Mean, #, Quantity[12, "Hours"]] & /@ data,
 PlotStyle -> {Red, Blue}, BaseStyle -> 16]
The first observation is that sentiment is generally slightly more negative about the Conservatives (who are the incumbent government). It is interesting to see that there are times when sentiment becomes more positive for both parties, and other times where the sentiment moves in opposite directions.
If we want to see the relative movement, we can simply find the difference between the sentiments expressed about the two parties:
trendTS = (#Conservative - #Labour &)[
   MovingMap[Mean, #, Quantity[12, "Hours"]] & /@ data];
trendPlot = DateListPlot[trendTS,
  FrameTicks -> {{Automatic, {{-0.1, "Labour"}, {0.05, "Conservative"}}}, {Automatic, None}},
  BaseStyle -> 16]
So the obvious question is, what happened to cause the peaks and troughs of this curve? It does look a bit like Labour generally does better on weekends.
Headlines as Context
In a short election cycle, the narrative moves quickly, so I went through an archive of the newspaper front pages to see what the dominant story was at various times and annotated the plot:
Show[trendPlot,
 DateListPlot[Callout[{First[#], trendTS[First[#]]}, Last[#]] & /@ {
    {"07:00, 16 Nov 2019", "Labour to\nnationalise broadband"},
    {"07:00, 17 Nov 2019", "Prince Andrew\nInterview"},
    {"20:00, 19 Nov 2019", "TV Debate"},
    {"07:00, 22 Nov 2019", "Labour to\nspend £80bn"},
    {"07:00, 25 Nov 2019", "Conservatives to\nhire 50k nurses"},
    {"07:00, 28 Nov 2019", "Labour claim\nNHS to be sold"},
    {"17:00, 29 Nov 2019", "Terrorist attack"},
    {"20:00, 1 Dec 2019", "TV Debate"},
    {"07:00, 3 Dec 2019", "NHS dossier\nRussia link"},
    {"07:00, 4 Dec 2019", "Trump comments\non NHS"},
    {"21:00, 6 Dec 2019", "TV Debate"},
    {"07:00, 9 Dec 2019", "Brexit\nleak"},
    {"12:00, 9 Dec 2019", "NHS photo gaffe"}
    }, Joined -> False]
 ]
The only general conclusion that I can draw is that when a party announces a policy, Twitter reacts negatively. The depressing conclusion would be that politicians should avoid policies and stick to slogans.
Here are some interpretation notes on the key recent events in the UK:
November 16: Labour announces a plan to nationalize British Telecom and offer free broadband. Nationalizing private industries hasn't happened in the UK since the 1970s.
November 17: The country is captivated for several days by Prince Andrew's ill-judged interview about his friendship with Jeffrey Epstein. As well as considering the interview to be a distraction from politics, Labour supporters are more likely to be anti-royal and also anti-billionaire.
November 22: Labour plans to spend an extra £88 billion on all kinds of public services. This is a lot by UK standards, even for Labour governments.
November 28: Labour claims a leaked document proves that the Conservatives plan to sell the National Health Service (NHS) to America. The NHS is a hot-button issue in the UK and always an important element of political debate.
November 29: A recently released convicted terrorist kills two people in London. Initially, Labour focuses the blame on the Conservative government having reduced police funding, but gradually the story switches to decisions made by the previous Labour government that caused the earlier release of the culprit.
December 3: Claims are made that Labour's document on the NHS was supplied by Russia trying to influence the election.
December 4: Donald Trump says that he isn't interested in the NHS.
December 9: Leaked documents show that the Conservatives plan for Brexit will be difficult to implement quickly. Brexit (the departure of the UK from the European Union) has dominated UK politics for several years and is a central question for this election.
December 9: During an interview, Conservative leader Boris Johnson refuses to look at a photo of a child sleeping on a hospital floor.
While the Wolfram Language makes it easy to collect, score and visualize the data from Twitter, interpretation remains tricky. Some days with interesting sentiment data do not appear to be correlated to strong headline stories in the newspapers, making it difficult to understand what has driven the sentiment. In hindsight, I should have put all of the Twitter data into the databin so that I could have analyzed content as well as sentiment. It does appear that policy announcements drive negative Twitter sentiment against the party making the announcement, which perhaps explains why so much of politics is about slogans rather than policies. One thing we can be certain about, however the election turns out: these peaks and troughs of sentiment will continue well into the future.
Get full access to the latest Wolfram Language functionality with a Mathematica 12 or Wolfram|One trial.
Engage with the code in this post by downloading the Wolfram Notebook.
This year’s Wolfram Technology Conference was host to the eighth annual One-Liner Competition, an event where attendees show us the most astounding things they can accomplish with 128 or fewer characters of Wolfram Language code. Submissions included games, card tricks and yoga exercises, all implemented with less than one tweet’s worth of the Wolfram Language.
Honorable Mention
Marco Thiel: Word Relay (128 characters)
Marco took advantage of inaccuracies in SpeechSynthesize and SpeechRecognize to create a machine version of the telephone game (or “Chinese whispers”), where a phrase is whispered from person to person and morphs along the way, perhaps into something completely unrelated to what was first spoken. It’s a faithful reproduction of the game, starting with a phrase spoken by the user and producing a sequence of audio outputs along the way:
The first output is a recording of me saying “tomato,” and the subsequent ones are this sequence of inaccurate repetitions: “tomato-eh,” “tomahto-wee,” “tomato-wee,” “tumito-wee,” “turn-to-wee,” “turn-to-wee,” “turn-to-wee,” “turn-to-wee,” “turn-to-me,” “to-to-me.”
The judges really loved the idea and the concise implementation, but it was somewhat finicky and not all of them could get a satisfactory result—which may be something for Wolfram developers to look into.
Honorable Mention
Stella Maymin: Magic Trick (127 characters)
This was the first-ever telepathic code entry in the One-Liner Competition. All you have to do is think of a card, and then when you press the “It is gone!” button, your card disappears! The code reads your mind!
g = ResourceFunction[
"PlayingCardGraphic"]; TabView[{"Just THINK of a card" ->
g[{11, 25, 37, 51, 26}], "It is gone!" -> g[{24, 12, 50, 25}]}]
Unfortunately there appears to be a bug in the code. If you picked the queen of hearts, it would not have disappeared. Nevertheless, the judges thought this entry was a great idea, and nicely executed.
Honorable/Dishonorable Mention
George Varnavides: Oboe Dragon (Curve) (128 characters)
This sonification of the dragon curve was the background music for the entire One-Liner judging session when the judges could not figure out how to turn it off and listened to its full 25 minutes. The judges loved the idea and the effect… for the first minute or two.
What the judges discovered too late was that playing any other sound would abort the one in the queue. For example:
EmitSound[SoundNote[]]
Third Place
Stella Maymin: Face Eraser (90 characters)
Stella’s third-place Face Eraser nicely uses a function that many Wolfram Language users may not know about: Inpaint. The combination of Inpaint with FindFaces to insert a blank where someone’s face ought to be yields creepy images appropriate to the Halloween timing of our technology conference:
c = CurrentImage[]; Inpaint[c,
Binarize@HighlightImage[c, {{"Blur", 999}, FindFaces@c}, "Remove"]]
Second Place
Amina Matt: Morning Yoga Routine (128 characters)
You can’t get the full effect of Amina’s yoga routine without actually evaluating the code. The output not only shows you the lotus pose but also gives an audio description of how to execute it, accompanied by a calming birdsong background:
EmitSound@ExampleData@{"Audio", "Bird"}~Do~11
Speak[n = "LotusPose"]
t = "YogaPose"~Entity~n
Speak@t@"Description"
Show@t@"Schematic"
What a great combination of elements! I challenge anyone to produce the equivalent in any other language with 10 times as many characters.
The judges said this would be a killer app had it produced random yoga poses, but recognized that Amina was right up against the 128-character limit, and that not all "YogaPose" entities have the required description.
First Place
Philip Maymin: Memory Game (127 characters)
Philip’s first-place entry is a clever adaptation of the conventional memory game to make it work as a One-Liner. Instead of matching pairs of like items, your task is to reveal the numbers 1 through 16 in sequential order:
k = l = 0; s = Partition[RandomSample@Range@16, 4]; Dynamic@
If[16 == l, k,
Grid[Map[If[# <= l, #,
Button[\[SixPointedStar], Speak@#; ++k; If[l + 1 == #, ++l]]] &,
s, {2}]]]
When you click an asterisk, you hear the number it hides spoken, and must remember what was spoken where in order to complete the game, trying to use the smallest number of clicks to do so:
When you’ve revealed all the numbers, the game board is replaced by your score, which is the number of clicks you used:
With just 127 characters, Philip has created a polished, interactive and challenging game that’s actually fun to play. Nice work!
The judges commented that it would have been really nice to show the score as you are playing. You did have one character left, Philip ;)
I’d like to point out that fully half of this year’s honorable mentions and prizes went to the father–daughter pair of Philip and Stella Maymin, a first in the history of the One-Liner Competitions.
There were 22 entries to this year’s contest, and many beyond the ones I could present here that merit a look. You can see all of the submissions in this notebook. Hope to see you at the One-Liner Competition next year!
The latest Wolfram technology stack makes it possible for you to develop and deploy useful applications in minutes. Start coding today with a Wolfram|One trial.
It’s been another big year of exploration with the Wolfram Language. CEO Stephen Wolfram’s new book takes us on a tour of his computational adventures throughout the years. We’re also excited to introduce a Spanish-language version of the popular An Elementary Introduction to the Wolfram Language, as well as books to enhance the mathematics and engineering curricula. There’s something new for everyone, from students to lifelong adventurers, to discover with the Wolfram Language. Just in time for the holidays, find the perfect read for those who love learning new things—including yourself!
Join Stephen Wolfram as he brings the reader along on some of his most surprising and engaging intellectual adventures, showcasing his own signature way of thinking about an impressive range of subjects. From science consulting for a Hollywood movie, solving problems of AI ethics, hunting for the source of an unusual polyhedron, communicating with extraterrestrials, to finding the fundamental theory of physics and exploring the digits of pi, this lively book of essays captures the infectious energy and curiosity of one of the great pioneers of the computational world.
¡Ahora en español por primera vez! Stephen Wolfram’s introduction to modern computational thinking is now available in Spanish, bringing the leading-edge computation of the Wolfram Language to an even larger audience. The Wolfram Language scales from a single line of easy-to-understand, interactive code to million-line production systems. The book does not assume any programming knowledge, making it suitable for students of all levels, as well as anyone interested in learning the latest technology and the world’s only large-scale computational language. You can also find the second edition of the English version here.
Look inside the machines that power our civilization and provide innumerable goods and services, and you will find mechanisms at the core. This book is designed to supplement the existing curriculum and students’ lab work. Building on the foundations from the classroom, Máquinas y mecanismos, implementación con Wolfram Mathematica enhances this branch of engineering for both sides of the instructor’s desk. Students engage more deeply with the material; examples and small Wolfram Language programs embedded within the text encourage the dynamic exploration of the behavior of any linkage. The Manipulate function in the Wolfram Language allows students to easily examine effects produced by variations in the lengths of the links or their relationships, the linear and angular speeds and accelerations of the input link, and the inertial parameters in the behavior of a mechanism.
Mathematical Analysis and Instrumental Methods for Solving Problems (Russian)
Compiled into two books, materials from the Mathematical Analysis course by authors Vladimir Grigorievich Chirsky and Kirill Yuryevich Shilin bring theoretical mathematics closer to instrumental methods. Sit in on the first and second semesters of the course (originally taught at the Department of Economics of the Institute of Economics, Mathematics and Information Technology, RANEPA), with each book chapter thoughtfully divided into four sections: 1) basic definitions, statement of theorems and simple proofs; 2) evidence of theorems and more in-depth readings; 3) solutions to common problems; and 4) how to solve the problems using Mathematica.
Discrete mathematics is as essential for a programmer as it is for an applied mathematician, but many students struggle with the subject. Discrete Mathematics: Learning to Program in Wolfram Mathematica combines an introduction to the main concepts and methods of discrete mathematics with the basics of programming in Mathematica. This book takes you through a range of theoretical and applied discrete mathematics, including combinatorics and enumerative combinatorics, data structures such as binary heaps and binary search trees, sorting algorithms, and operations in residue rings and modern encryption methods. Twelve Mathematica “programming lessons” show the actual code to implement all the algorithms introduced throughout the chapters, making this book a valuable resource for students, university professors and anyone interested in learning how to program in Mathematica.
This compact, new edition of The Student’s Introduction to Mathematica and the Wolfram Language is an excellent companion to the standard mathematics curriculum. A brief introduction to the aspects of the Mathematica software program most useful to students, this third edition includes significant updated material to account for the latest developments in the Wolfram Language. As a result, this book also serves as a great tutorial for former students looking to learn about new features such as natural language queries, 3D printing and computational geometry, as well as the vast stores of real-world data now integrated through the cloud.
It’s rare to hear polygons mentioned in a physics class, even in higher education. This may seem unexpected given the fundamental role they play in mathematics. However, over the last few years, polygons have come to the front line in many areas of theoretical physics, helping us understand the laws of nature with their astonishing beauty.
This is particularly true in the field of particle physics, where a new geometrical object has been found to be connected to particle dynamics: the amplituhedron. It represents a novelty not only in physics but also in mathematics, generalizing the concept of a convex polygon. In this blog post, I will first discuss its relation to particle physics, and then how to visualize its geometry using the Wolfram Language.
A Little about Me
I’m a PhD student in theoretical physics at Durham University, United Kingdom. I was born in Venice, Italy, and I did my bachelor’s and master’s degrees in physics in the city of Trieste. After earning my degree, I was lucky enough to get into a wonderful PhD program (ITN) called SAGEX, which is funded by the European Union. We are a group of 15 early-stage researchers and as many supervisors distributed among 8 different academic institutions around Europe. The purposes of the project are to investigate the geometric structure hidden in the laws of particle dynamics and spread the word about all the amazing new discoveries in the field, like the amplituhedron.
As part of my SAGEX training, I’m spending three months as a visiting scholar at Wolfram Research in Champaign, Illinois. During my time here, I’ve started discovering all the features that the Wolfram Language offers to deal with many different geometries and represent them graphically. I spent a year or so working on the amplituhedron, and had a lot of fun creating a series of sketched drawings while trying to get some intuition about this funny object. Now that my experience at Wolfram is coming to an end, I think the time has come to transform those wrinkled leaves into some incredible, colorful pictures using the Wolfram Language.
Scattering Amplitudes
Our knowledge of elementary particles and their nature is almost entirely based on scattering experiments. Every day, in many laboratories around the world, particles such as electrons and protons are accelerated to extremely high velocity and crashed into one another. After the collision, the kinetic energy of crashing particles is converted into new particles. These new particles will then scatter in all directions, eventually hitting a detector where their velocity, charge and mass are recorded.
So, here’s the big question theoretical physicists are trying to answer: If I shoot two protons into one another, which type of particles might appear? In which direction and with what velocity? Physics is about making predictions. Trying to answer these types of questions corresponds to investigating the fundamental nature of particle interaction.
It just so happens that being able to give exact answers to this question is almost impossible. However, we can formulate approximate solutions with astonishing precision. The theory behind this representation of particle dynamics is called perturbative quantum field theory (QFT), and it was mainly the result of the work of S. Tomonaga, J. Schwinger and R. Feynman, who together won the Nobel Prize in Physics in 1965. Many mathematical steps are needed to carry out these calculations and then compare them with the data coming from the collision experiments.
At the core of this sophisticated procedure, there is a graph theory/combinatorial problem that is usually indicated as “the sum over all possible Feynman diagrams.” The result of this calculation is called a scattering amplitude, or sometimes just amplitude. Most of the time this step represents a bottleneck for the whole calculation. This is because its complexity grows factorially with the number of particles involved in the scattering and the precision we want to obtain. Increasing these two parameters quickly makes the computation impossible even for supercomputers.
From Geometry to Amplitudes
In the last 20 years, an enormous amount of progress has been made on the study of amplitudes. It’s a story in which the main character is the gluon, the mediator of the nuclear force. Here, I would like to highlight two remarkable discoveries, both from 2009, that are directly responsible for the discovery of the amplituhedron. The first is that a particular class of gluon amplitude can be thought of as the volume of a polytope. The second one is that all amplitudes in a particular theory, called planar N = 4 SYM, are strictly connected to the Grassmannian, which is a space of hyperplanes. In December 2013, N. Arkani-Hamed and J. Trnka published “The Amplituhedron” and were able to make a connection between these two amazing facts, opening new perspectives and puzzles. You can read more about this in J. Bourjaily and H. Thomas’s article, “What Is the Amplituhedron?”.
The amplituhedron is a very general geometric object that can appear in many forms. There are three important parameters that define the amplituhedron, usually denoted m, n and k. The first, m, is the dimension in which the amplituhedron lives. The second, n, is the number of points that we use to build it; this parameter physically corresponds to the number of particles participating in the scattering process. The last one, k, is more subtle both geometrically and physically. For k = 1 and m = 4, the amplituhedron is a four-dimensional polytope, the generalization of a two-dimensional polygon or three-dimensional polyhedron to higher dimensions.
A polygon can be thought of as the set of points trapped inside a curve given by many segments. As I will try to show you, the amplituhedron for k > 1 generalizes this idea to hyperplanes. For example, for k = 2, the amplituhedron will be given by a set of lines trapped inside an edge curve. Physically, the parameter k roughly indicates the type of particles scattering.
The main goal of this blog post is to focus on the k = 2 case and use the Wolfram Language to create a visual representation of this weird set of lines, and also to show in which sense it represents the natural generalization of a polygon.
The amplituhedron has a stunning compact definition that can be given in one line, but to understand it we need to be able to read the language of projective geometry.
A Taste of Projective Geometry
Projective geometry is an incredibly powerful tool to map complex geometric problems to elementary linear algebra problems. The basic idea is that I can represent points in a plane as lines passing through the origin in three dimensions:
As you can see in the image, this correspondence can be built by fixing a plane and considering the intersection between the plane and the homogeneous lines. In general, two points are needed to identify a line but, since each line is passing through the origin, one point is enough to determine it. In this image, the points are labeled by the Zᵢ. You will notice that, if I multiply the coordinates of one of the Zᵢ’s by a constant, the effect will be to move it along the line it represents. So for example, both Z and λZ will be equivalent because they represent the same line. This is in fact the formal way in which the projective space is defined. The two-dimensional projective space ℙ²—that is, the one represented in the picture—is defined as the set of points in three dimensions identified by the equivalence relation Z ∼ λZ, with λ ≠ 0.
OK, now that we have this fancy way to think about points, what can we do with it? First of all, notice that depending on which plane we project onto, the distances between points can change. The idea is that we would like to work just with lines in three dimensions, without depending on the plane we are projecting onto.
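To make this concrete, here is a tiny sketch (the coordinates are an invented example of mine, not from the original notebook): rescaling a homogeneous point by any nonzero constant leaves its intersection with the plane z = 1 unchanged:

```
z = {2., 4., 2.};
project[p_] := p/Last[p]; (*intersect the line through p and the origin with the plane z == 1*)
project[z] === project[5 z] (*both rescalings represent the same projective point*)
```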
Determinants Are All You Need
What are the kinds of questions we can ask that do not depend on the specific projective plane we choose? There is a question that is particularly easy to answer in this setup: are three points aligned? In fact, if three points are aligned, the three vectors representing them will all be contained in a plane. This means that they will be linearly dependent, and therefore the determinant det(Z₁, Z₂, Z₃) will be 0.
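As a quick numerical check (with made-up coordinates of my own), three aligned points give a vanishing determinant, while nudging one of them off the line does not:

```
z1 = {1, 0, 1}; z2 = {0, 1, 1}; z3 = {-1, 2, 1}; (*all three lie on the line x + y == 1 in the plane z == 1*)
Det[{z1, z2, z3}] (*0, so the three points are aligned*)
Det[{z1, z2, z3 + {0, 1, 0}}] (*nonzero once z3 is moved off the line*)
```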
What if the determinant is different from 0? Since the Zᵢ are projective, we can’t say much. In fact, sending Z to −Z changes the sign of the determinant from positive to negative. One can decide, therefore, to be more restrictive and consider half-lines instead of lines. This amounts to restricting the equivalence relation to λ > 0. In this way, the sign of the determinant becomes invariant under the rescaling and becomes something we can meaningfully talk about.
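We can verify this invariance with another small example of my own: the sign of the determinant survives rescaling by a positive factor but flips when a point is sent to its negative:

```
z1 = {1, 0, 1}; z2 = {0, 1, 1}; z3 = {2, 2, 1};
Sign[Det[{z1, z2, z3}]] === Sign[Det[{z1, z2, 3 z3}]] (*True: positive rescaling preserves the sign*)
Sign[Det[{z1, z2, z3}]] === Sign[Det[{z1, z2, -z3}]] (*False: negating a point flips the orientation*)
```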
There is a simple trick in three dimensions to understand whether a determinant is positive or negative: the right-hand rule. Using this rule, you can easily see that the determinant det(Z₁, Z₂, Z₃) is negative for one orientation of the three points and positive for the opposite ordering. You can see this for yourself by playing with it: saying that a point is on the right side of the line through the other two is the same as fixing the sign of the determinant. Beware, though! The notion of right and left changes if you are watching the projective plane from below or from above, in the same way that your right hand corresponds to the left hand of your mirror image.
Inside a Triangle
We will start by exploring the first way in which the amplituhedron can appear: the convex polygon. It’s in this projective formulation that A. Hodges in 2009 recognized that a particular gluon scattering amplitude was indeed the volume of a polytope in projective space. Unfortunately, we cannot see in four dimensions, so we will stick to ℙ² without actually losing much of the general picture. Let’s start from the simplest polygon, the triangle.
Suppose I give you three vertices P₁, P₂ and P₃, and I ask you to parametrize the space of all points inside the triangle they span. It seems like a simple yet very annoying task, doesn’t it? But there is an incredibly efficient way to do it!
Using physics to get some intuition, we can imagine that each of these points has a mass, and we want to calculate their common center of mass. The center of mass will correspond to the weighted average:
A = (m1 P1 + m2 P2 + m3 P3)/(m1 + m2 + m3);
where the mass parameters are clearly positive. The idea is that the center of mass will always lie somewhere among the three points, depending on the values of the individual masses. We could be satisfied with this result, but we can actually do better. Let’s look at our triangle projectively. This time we will try to perform some visualization, so let’s choose some coordinates for the points P1, P2, P3:
P = 2 Table[RandomReal[], 3, 2];
You can confirm that the type of logic we have used in two dimensions is also valid for points in three dimensions. We can construct the three-dimensional points from the two-dimensional ones simply by appending a unit third coordinate to each point. We will call the new three-dimensional points newP.
Consider now the unnormalized point A = m1 P1 + m2 P2 + m3 P3, which can be written in the Wolfram Language as:
M = {m1, m2, m3}
A = M.newP
The point A is not exactly on the triangle defined by the points newP, because of the missing normalization factor 1/(m1 + m2 + m3). But here comes the change of perspective. Let’s think of all our points as half-lines (or oriented lines) passing through the origin:
If we rescale our points by a positive factor λ, they will always represent the same half-line. This means that we can rescale A so that it lies on the triangle, or even better, just invoke the equivalence relation A ~ λA.
We can summarize by saying that the inside of a triangle with vertices P1, P2, P3 can be parametrized by the map (m1, m2, m3) ↦ m1 P1 + m2 P2 + m3 P3, with all the mi positive.
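As a sanity check of this center-of-mass parametrization, here is a small Python sketch (illustrative only, with made-up vertices): lift the 2D vertices by appending a unit coordinate, combine them with positive masses, rescale back to the plane and confirm that the result lands inside the triangle.

```python
import random

# Hypothetical triangle vertices in the plane.
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
newP = [(x, y, 1.0) for (x, y) in P]   # lift: append a unit third coordinate

# Three positive masses parametrize the interior.
m = [random.uniform(0.1, 1.0) for _ in range(3)]
A = [sum(mi * p[k] for mi, p in zip(m, newP)) for k in range(3)]

# Rescale so the third coordinate is 1 again, landing back on the plane z = 1.
x, y = A[0] / A[2], A[1] / A[2]
# For these vertices, "inside" means x > 0, y > 0 and x + y < 1.
print(x > 0 and y > 0 and x + y < 1)   # True for any positive masses
```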
Inside a Polygon
The beauty of this projective construction of the inside of a triangle is that it generalizes in a straightforward way to polygons. Suppose this time, instead of three points, we have five points P1, …, P5. Following the center of mass argument and knowing that A can be rescaled, we can immediately say that the inside of the pentagon will be given by:
A = m1 P1 + m2 P2 + m3 P3 + m4 P4 + m5 P5
Great, but this time there is a new element coming into play. Points can be arranged in different ways!
You can see that five points define a pentagon only if they are distributed in a specific way. So how can I be sure that the points I have chosen form a convex polygon? In the case of a convex polygon, if I take a line along one of the edges, all the other points will be on the same side of that line; this fails for at least one edge in the nonconvex arrangement in the first image. But we know how to express this concept mathematically: being on the right side of a line is a statement about the sign of a determinant! So if we have n ordered points and we want them to represent a convex polygon like the one in the image, we must impose that the determinant of every ordered triple of points is positive. If one considers the n × k matrix built by stacking the points, where n is the number of points and k is the dimension of the embedding space, the convexity condition is equivalent to saying that all the ordered maximal minors of the matrix must be positive:
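The ordered-minors test is easy to experiment with numerically. The following Python sketch (hypothetical coordinates, illustration only) lifts an ordered list of planar points and checks that every ordered 3×3 minor is positive; this holds for a counterclockwise convex pentagon and fails once two vertices are swapped.

```python
import math
from itertools import combinations

def det3(p, q, r):
    # 3x3 determinant with rows p, q, r.
    return (p[0] * (q[1] * r[2] - q[2] * r[1])
          - p[1] * (q[0] * r[2] - q[2] * r[0])
          + p[2] * (q[0] * r[1] - q[1] * r[0]))

def is_convex_positive(points):
    # points: ordered list of lifted vectors (x, y, 1); convex and
    # counterclockwise exactly when every ordered minor <Pi Pj Pk> is positive.
    return all(det3(points[i], points[j], points[k]) > 0
               for i, j, k in combinations(range(len(points)), 3))

# A regular pentagon, listed counterclockwise (hypothetical coordinates).
pentagon = [(math.cos(2 * math.pi * i / 5), math.sin(2 * math.pi * i / 5), 1.0)
            for i in range(5)]
print(is_convex_positive(pentagon))    # True

# Swapping two vertices breaks the ordering: some minor turns negative.
scrambled = [pentagon[0], pentagon[2], pentagon[1], pentagon[3], pentagon[4]]
print(is_convex_positive(scrambled))   # False
```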
In this description of the inside of a polygon, you can see that there is a pairing of two positive spaces: the space of the mass-parameters vector m, where all the mi are positive; and the convexity of the space of points, where all the ordered k×k minors of the matrix of points are positive and k stands for the dimension of the embedding space where our points live. In the case of polygons, k = 3. In the next section, we will finally see the true novelty introduced by the amplituhedron in geometry: the Grassmannian polytope.
Inside an Amplituhedron
I previously pointed out that a polygon can be described as the set of points that are on the right side of a bunch of lines. What if, instead of points, we would like to describe a space made of lines? Can we come up with some similar concept? The amplituhedron is exactly the answer to this question, and it generalizes this idea not only to lines but also to any hyperplane in any dimension. We will focus now on the space of lines in three-dimensional projective space, known as the Grassmannian Gr(2, 4), and here we will build the amplituhedron.
First, how can we mathematically describe a line in projective space? Well, a line is identified by two points. So instead of having just one point A, I can think of having two points, A and B. If I just rewrite the definition of a polytope and promote the parameters vector to a matrix, I get:
{A, B} = M.P
Now I have to find the inequalities defining the amplituhedron. If I arrange the points into a matrix P, I can rephrase the polygon positivity constraint by saying that all the ordered minors of P are positive. You can see now that this definition naturally generalizes to the matrix M! Using this definition, the amplituhedron will be given by the pairs of points (A, B) parametrized by the formula {A, B} = M.P, where all the ordered minors of both M and P are positive.
What does this space of lines look like? There are various ways to represent it, as you will see. First of all, we have to fix some coordinates. It’s true that two points identify a line, but this is a redundant description. In fact, I can slide the points along the line, and they will keep describing the same line. In the embedding space, the points A and B are two vectors that span a plane. In other words, if g is a 2×2 matrix with positive determinant, then g.{A, B} and {A, B} will identify the same line in the projective space. The determinant is fixed to be positive to preserve the orientation of the line:
Let’s look now at the specific case in which I have four points defining our amplituhedron in Gr(2, 4), meaning that the embedding space is four dimensional. First, I need the minors of P to be positive. I can easily achieve this by choosing P to be equal to the identity matrix. Then I have to take care of the positivity of the matrix M. I can use the equivalence relation to reduce the number of parameters to four, and write the matrix as:
M = {{1, a, 0, -b}, {0, c, 1, d}}
I choose this strange parametrization because its minors are particularly simple. In fact, if I calculate the minors and impose the positivity constraint, I get:
ineq = # > 0 & /@ Flatten[Minors[M, 2]]
Then, using Reduce to solve the inequalities, I obtain:
Reduce[ineq]
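You can verify the structure of these minors without a computer algebra system. The Python sketch below (illustrative only) lists the six ordered 2×2 minors of the matrix above for sample parameter values; they come out to c, 1, d, a, ad + bc and b, so positivity of all minors reduces to a, b, c, d > 0.

```python
from itertools import combinations

def make_M(a, b, c, d):
    # The parametrization used in the text.
    return [[1, a, 0, -b],
            [0, c, 1, d]]

def minors2(M):
    # The six ordered 2x2 minors of a 2x4 matrix, columns left to right.
    return [M[0][i] * M[1][j] - M[0][j] * M[1][i]
            for i, j in combinations(range(4), 2)]

# Column pairs (1,2),(1,3),(1,4),(2,3),(2,4),(3,4) give c, 1, d, a, ad+bc, b:
print(minors2(make_M(1, 2, 3, 4)))   # [3, 1, 4, 1, 10, 2]
```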
It’s a set of four inequalities, as simple as can be. To visualize this space of lines, first I have to project everything down to three dimensions. Any choice of the projective plane is valid; here, I choose a plane orthogonal to a fixed reference vector. I then sample the parameter space, generating two thousand lines. I also add to the same plot the tetrahedron formed by the points P1, …, P4, and we get the result:
You can see that the amplituhedron in this case corresponds to lines passing inside a tetrahedron, or a bundle of wheat, if you like. You will also notice that two faces of the tetrahedron are crossed by the lines, while the other two are not. This image is suggestive but can also be misleading. In fact, there are lines that intersect those same two faces of the tetrahedron and yet do not belong to the amplituhedron. To get the correct intuition, we have to look at the allowed regions for the points A and B. You can see that A is given by a positive combination of three of the vertices, which is exactly the definition we gave for the inside of a triangle. Therefore, A belongs to a face of the tetrahedron, which corresponds to the red triangle in the following image. The point B instead is defined by a similar combination, but with a negative parameter. Again using the analogy with the center of mass, you can think of B as involving a negative mass: instead of dragging the center of mass toward its position, the negative term repels it toward infinity. I have represented its domain with the blue region at the bottom of the tetrahedron. One can prove that a line crossing the blue and red triangles will automatically also cross the green and yellow triangles. In fact, for different parametrizations of M, we would have had A belonging to the green triangle and B to the yellow one.
Based on this image, we can say that the amplituhedron is the set of oriented lines intersecting the blue, green, red and yellow regions in this order, or any cyclic variant of it. For example, another allowed sequence would be green, red, yellow, blue.
Last but not least, there’s still another very effective way to represent the amplituhedron. When I imagine a polygon, I usually think of its edges, and for a polyhedron I think of its faces. Compared with the previous description, which focused on the interior, this image corresponds to a boundary representation. The boundaries of the space of lines defining the amplituhedron correspond to the boundary of the parameter space, that is, the locus where one of the inequalities gets saturated. For example, you can see that in the limit where one of the parameters goes to zero, the point A degenerates onto an edge of the tetrahedron. This means that the corresponding segment is a boundary that the line cannot cross. By representing all boundary components, I can obtain an elegant and essential visualization. When I plot the boundaries with their orientations and one element of the amplituhedron, this is the result:
This image represents the amplituhedron as the set of all oriented lines trapped inside a red polygonal spiral. One thing that can be deceptive in this representation is that it seems the gray line can be parallel to one of the red segments. This is actually false: what “parallel” would mean here is that both lines have the same direction. But this implies that they belong to the same plane, and therefore the corresponding determinant will be equal to 0.
Finally, I would like to show you what an n-point amplituhedron looks like in three dimensions. This time, I have to be sure that all the minors of the matrix of points are positive. In order to do this, I choose to distribute the points along a spiral. Distances in this business have no meaning, so I can distribute the points in any way I like as long as they satisfy the convexity requirement. I tried to distribute them in the nicest way, obtaining the polyhedron you see here:
As a final step, I can highlight the boundary of the amplituhedron. The n-point amplituhedron’s boundary is given by the union of the segments joining consecutive points. One boundary term is special: the segment joining the last point back to the first. As we have seen for the four-point case, it doesn’t go along the corresponding edge of the polytope but along the complement of that segment. The result:
I have smoothed down the polygonal spiral to give the illusion of an arbitrary number of points. In analogy with the tetrahedron, we can say that the n-point amplituhedron in this example is given by all the oriented lines trapped inside the spiral.
The Importance of a Picture
In this blog post, I focused on one particular type of amplituhedron: the one that lives in the Grassmannian of lines in three projective dimensions. I have chosen this example because it’s the only amplituhedron with a physical interpretation that lives in fewer than four dimensions. One thing I really like about these spaces of lines or planes is that they are constructed using very primitive ideas. They force you to think about basic questions such as: What is a triangle? How do I usually picture it in my mind? I think these Grassmannian geometries really deserve a graphical representation capable of exposing their beauty and simplicity. It has been fun trying to achieve this for the amplituhedron, and I hope that you had fun too!
Bonus: Training for the Marathon
For those of you who were determined enough to get to the bottom of this post, there are a lot of other interesting ideas related to the amplituhedron that I encourage you to check out. You can see this blog post as training for the amazing marathon of lectures (you will see why it’s called a marathon) held by Nima Arkani-Hamed in June 2018 at the Center for Quantum Mathematics and Physics (QMAP) at UC Davis. In particular, the second lecture is a practical introduction to projective geometry, while the third one introduces the concept of the canonical form, the most notably absent topic in this post. The canonical form is in fact the map that connects the amplituhedron with actual amplitudes. Like the amplituhedron itself, it represents a novelty in mathematics and is a fascinating topic in its own right. Enjoy the run!
This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement no. 764850.
Two weeks ago, I had the pleasure of returning as a commentator for the fourth annual Livecoding Championship, a special event held during the 2019 Wolfram Technology Conference. We had such an incredible turnout this year, with 27 total participants and 14 earning at least one point! Conference attendees and Wolfram staff competed for the title of Livecoding Champion, with seven questions (plus one tiebreaker!) challenging their speed, agility and knowledge of the Wolfram Language. It was a high-spirited battle for first place, and while I had prepared “answer key” solutions in advance, I always look forward to the creativity and cleverness that competitors demonstrate in their wide range of approaches to each question.
By popular request, in addition to revisiting the questions, I’ll walk you through how competitors reached their solutions and earned their points, as a kind of “study guide” for next year’s aspiring champions. So hold on to your keyboards—we’re going in!
Follow along with these questions by watching this recorded video of the 2019 Livecoding Championship livestream!
Question 1: Symbol Eponyms
Find the 10 people with the most Wolfram Language symbols named after them (eponyms), and return the most common country of birth among them as a "Country" entity.
An eponym is a “namesake”—the person (or other entity) after which a thing is named. The Wolfram Language contains many eponymic functions and other symbols, particularly in mathematical domains (Fibonacci, EulerGamma, DelaunayMesh, etc.), and this data is accessible from the Wolfram Knowledgebase through the "EponymousPeople" property of "WolframLanguageSymbol" entities. For example:
Entity["WolframLanguageSymbol", "Fibonacci"]["EponymousPeople"]
This returns a list of "Person" entities. The birthplace of a "Person" entity can be queried with the "BirthPlace" property:
%[[1]]["BirthPlace"]
In this case, the birthplace is a "City" entity, the country of which can be found using the "Country" property:
%["Country"]
Since I wrote the questions, I had to solve them too! Here’s my solution to the question, using this approach:
people = Flatten@
EntityValue["WolframLanguageSymbol", "EponymousPeople"];
commonestPeople = Commonest[people, 10];
birthplaces = EntityValue[commonestPeople, "BirthPlace"];
First@Commonest@EntityValue[birthplaces, "Country"]
The first line collects the eponyms for all "WolframLanguageSymbol" entities, and the second selects the 10 most common people in that list. That’s the first part of the question, “Find the 10 people with the most Wolfram Language symbols named after them.”
Next, we query the cities of birth of each of the 10 people, get the country each city is in and extract the single most common country in this final list—Germany.
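The Knowledgebase queries themselves require the Wolfram Language, but the counting step is ordinary tallying. Here is an illustrative Python sketch of the same logic on a tiny, made-up symbol-to-eponym table (the symbol list and counts are invented, purely for illustration), using `Counter.most_common` in place of Commonest:

```python
from collections import Counter

# Hypothetical stand-in data: Wolfram Language symbol -> eponymous people.
eponyms = {
    "Fibonacci": ["Fibonacci"],
    "FibonacciLike": ["Fibonacci"],   # invented symbol name, for illustration
    "EulerGamma": ["Leonhard Euler"],
    "EulerPhi": ["Leonhard Euler"],
    "EulerE": ["Leonhard Euler"],
    "DelaunayMesh": ["Boris Delaunay"],
}

people = [p for ps in eponyms.values() for p in ps]    # like Flatten
top = [p for p, _ in Counter(people).most_common(2)]   # like Commonest[people, 2]
print(top)   # ['Leonhard Euler', 'Fibonacci']
```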
As a bonus, we can see who these 10 most eponymous people are, along with the number of symbols each has to their name:
ReverseSort@KeyTake[Counts[people], commonestPeople]
And the countries of birth of each:
EntityValue[#, "Country"] /@
EntityValue[commonestPeople, "BirthPlace",
"EntityAssociation"] // Sort
Now let’s see what our contestants came up with!
First Place: Gerli (Solved in 4m 41s)
symbols = WolframLanguageData[];
people = EntityValue[symbols, "EponymousPeople"];
places = EntityValue[
SortBy[Tally[Flatten[people]], Last][[-10 ;;]][[All, 1]],
"BirthPlace"];
First[Last[SortBy[Tally[EntityValue[places, "Country"]], Last]]]
The contestant going by the screen name “Gerli” would prove to be a formidable force throughout the evening. Their solution is structurally quite similar to mine, although some components are slightly different. Gerli used (in effect) EntityValue[WolframLanguageData[], "EponymousPeople"] in place of my EntityValue["WolframLanguageSymbol", "EponymousPeople"]; these are essentially equivalent, so choosing one is just a matter of style. They also chose to combine Tally, SortBy[Last] and Part where I used Commonest.
Second Place:
My student days learning fluid dynamics were all about studying complicated equations and various methods of simplifying and manipulating these equations to get some kind of a result. Unfortunately, this left very little to the imagination when it came to getting an intuitive feel for how a fluid would behave in different situations. When I took my first experimental fluid dynamics course, I got to see how one would use different visualization techniques to understand qualitatively the behavior of the flow. These visualizations gave me a way of creatively looking at a flow, and, as an added bonus, they looked stunning. All these experiments and visualizations were being carried out inside a wind tunnel.
Creating Our Computational Wind Tunnel
Wind tunnels are devices used by experimental fluid dynamics researchers to study the behavior of an object as it moves through air (or any other fluid). Since the object itself cannot move inside this tunnel, a controlled stream of air, generally produced by a powerful fan, is generated and the object is placed in the path of this stream. This produces the same effect as the object moving through stationary air. Such experiments are quite useful in understanding the aerodynamics of an object.
There are different kinds of wind tunnels. The simplest wind tunnel is a hollow pipe, or rectangular box. One end of this tunnel is fitted with a fan, while the other end is open. Such a tunnel is called an open-return wind tunnel. Experimentally, they are not the most efficient or reliable, but would work well if one were to build a computational wind tunnel—which is what we aim to do in this blog post. Here is the basic schematic of the wind tunnel that we will develop:
Our wind tunnel will be a 2D wind tunnel. Fluid will enter the tunnel from the left and leave from the right. The top and bottom are solid walls. I should point out that since this is a computational wind tunnel, we are afforded great flexibility in choosing what kinds of boundaries it can possess. For example, we could make it so that the left, right and bottom walls do not move but the top wall does. We would then have the popular case of flow in a box.
When one starts thinking of computational fluid dynamics, our thoughts invariably jump to the famous Navier–Stokes equations. These equations are the governing equations that dictate the behavior of fluid flow. At this point, it might seem like we are going to use the Navier–Stokes equations to help us build a wind tunnel. But as it turns out, there are other methods to study the behavior of fluid flow without solving the Navier–Stokes equations. One of those methods is called the lattice Boltzmann method (LBM).
If we were to use the Navier–Stokes equations, we would be dealing with a complicated system of partial differential equations (PDEs). To solve these numerically, we would have to employ various techniques to discretize the derivatives. Once the discretization is done, we are left with a massive system of nonlinear algebraic equations that has to be solved. This is computationally exhausting! Using the alternative approach of the LBM, we completely bypass this traditional approach. There are no systems of equations to solve in the LBM. Also, a lot of the operations (which I will describe later) are completely local operations. This makes the LBM a highly parallel method.
In a very simplistic framework, we can think of the LBM as a “bottom-up” approach. In this approach, we perform the simulations in the microscopic, or “lattice,” domain. Imagine you have a physical or macroscopic domain. If we were to zoom in on a single point in this macroscopic domain, there would be a number of particles that are interacting with each other based on some “rule” about their interaction:
For example, if two particles hit each other, how would they react or bounce off each other? These particles follow some discrete rule. Now, if we were to let these particles evolve over time based on these rules (you can see how this is closely related to cellular automata) and take averages, then these averages could be used to describe certain macroscopic quantities. As an example, the HPP model (named after Hardy, Pomeau and de Pazzis) we saw in the previous figure can be used to simulate the diffusion of gases.
Though this discrete approach sounds enticing (and researchers in the mid-1970s did try out its feasibility), it has a number of drawbacks. One of the major issues was the statistical noise in the final result. However, it is from these principles and attempts to overcome the drawbacks that the LBM emerged. A web search on theoretical aspects of this method will reveal many links to derivations and the final equations. In this blog, rather than focus on the theoretical aspect, I would like to focus on the final underlying mechanism through which the lattice Boltzmann simulations are performed. So I will only touch on the final equations we will need to develop our wind tunnel. First, assume that the density and the velocities are described by the following equations:
where the fi are called distribution functions and ex, i, ey, i are discrete velocities and have the following values:
The computation of the density and velocities can be done as follows:
ex = {1, 1, 0, -1, -1, -1, 0, 1, 0};
ey = {0, 1, 1, 1, 0, -1, -1, -1, 0};
LBMDensityAndVelocity[f_] := Block[{rho, u, v},
rho = Total[f, {2}];
u = (f.ex)/rho;
v = (f.ey)/rho;
{rho, u, v}
];
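The same moment computation can be written in a few lines of plain Python for a single lattice node (illustration only, using the same velocity ordering as the code above). Note that the D2Q9 weights themselves form a valid rest-state distribution with unit density and zero velocity:

```python
# D2Q9 discrete velocities, in the same ordering as the post's ex and ey lists.
ex = [1, 1, 0, -1, -1, -1, 0, 1, 0]
ey = [0, 1, 1, 1, 0, -1, -1, -1, 0]

def moments(f):
    # Density and velocity at a single lattice node from its nine
    # distribution values f.
    rho = sum(f)
    u = sum(fi * e for fi, e in zip(f, ex)) / rho
    v = sum(fi * e for fi, e in zip(f, ey)) / rho
    return rho, u, v

# The D2Q9 weights are a valid rest-state distribution: density 1, velocity 0.
wts = [1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 4/9]
print(moments(wts))   # approximately (1.0, 0.0, 0.0)
```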
These nine discrete velocities are associated with their respective distribution functions fi and can be visualized as follows:
At each (discrete) point in the lattice domain, there will exist nine of these distribution functions. A model that uses these nine discrete velocities and functions fi is called the D2Q9 model. If the distribution functions fi are known, then the velocity field is known. Since the velocity field evolves over space and time, we must expect these distribution functions to also evolve over space and time. The equation governing the spatio-temporal evolution of fi is given by:
The term Ωi is a complicated “collision” term that basically dictates how the various fi interact with each other. Using a number of simplifications and approximations, this equation is reduced to the following:
where fieq is called the equilibrium distribution function, τ is called the relaxation parameter and tLBM = 1 is the time step in the lattice Boltzmann domain. This approximation is called the BGK approximation (after Bhatnagar, Gross and Krook), and the resulting model is called the lattice BGK model. The detailed definitions of these two terms will be provided later. The spatial and temporal evolutions of these functions are done in two steps: (a) the streaming step; and (b) the collision step.
Since tLBM = 1 and the discrete velocities ex, i, ey, i only take the values −1, 0 and 1, the streaming step is given by:
To visualize the streaming step, imagine the discretized domain. On each grid point of this domain are nine of these distribution functions (as shown in the following figure). Note the various colors for each of the grid points. The length of each arrow indicates the magnitude of the respective distribution functions:
Based on the mathematical formulation, each distribution will be streamed in the respective directions as follows:
Note the colors and where they ended up. It would be helpful to just focus on the center point. Before the streaming step, the arrows (which represent the different fi) are all green. After the streaming step, notice where the green arrows land on the surrounding grid points. This, in short, is the streaming step. In Mathematica, this step is done very easily using the built-in functions RotateLeft or RotateRight. Here, we will be using RotateRight:
right = {0, 1};
left = {0, -1};
bottom = {1, 0};
top = {-1, 0};
none = {0, 0};
streamDir = {right, top + right, top, top + left, left, bottom + left,
bottom, bottom + right, none};
LBMStream[f_] :=
Transpose[
MapThread[Flatten[RotateRight[#1, #2]] , {f, streamDir}]];
When one does the streaming, special care has to be taken to address the boundaries of the wind tunnel. When the streaming is done, there are certain fi that become unknown at the edges and corners of the domain. The schematic (shown in the following figure) shows which fi are unknown for their respective edges and corners. The unknowns are represented by dashed arrows:
To understand how the unknown fi are computed, let us consider the top wall of the wind tunnel. For this edge, f6, f7, f8 are unknowns. We let:
where Q = (Qx, Qy) is a two-dimensional unknown vector, and fieq are the equilibrium distribution functions, defined as:
The details of this approach are given in the paper by Ho, Chang, Lin and Lin. This operation (which is actually a highly parallelizable operation) can be done efficiently in Mathematica as:
wts = {1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 4/9};
LBMEquilibriumDistributions[rho_, u_, v_] :=
Module[{uv, euv, velMag},
uv = Transpose[{u, v}];
euv = uv.{ex, ey};
velMag = Transpose[ConstantArray[Total[uv^2, {2}], 9]];
(Transpose[{rho}].{wts})*(1 + 3*euv + (9/2)*euv^2 - (3/2)*velMag)
];
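For a single node, the equilibrium distribution is just a loop over the nine directions. This Python sketch (illustrative, not the post's code) also checks a useful identity: summing fieq over all directions returns the density, since the velocity-dependent terms cancel against the weights.

```python
ex = [1, 1, 0, -1, -1, -1, 0, 1, 0]
ey = [0, 1, 1, 1, 0, -1, -1, -1, 0]
wts = [1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 4/9]

def feq(rho, u, v):
    # BGK equilibrium at one node:
    # w_i * rho * (1 + 3 e.u + 9/2 (e.u)^2 - 3/2 |u|^2)
    out = []
    for w, exi, eyi in zip(wts, ex, ey):
        eu = exi * u + eyi * v
        out.append(w * rho * (1 + 3 * eu + 4.5 * eu * eu
                              - 1.5 * (u * u + v * v)))
    return out

f = feq(1.0, 0.1, 0.0)
print(sum(f))   # approximately 1.0: the equilibrium carries the same density
```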
Notice that fieq are completely defined by the velocity and density. Substituting this into the equations for velocity and density, we get:
where UBC, VBC are the velocities specified by the user for the boundary. These resulting equations are linear. There are three unknowns {ρ, Qx, Qy}, and there are three equations. Such systems can easily be solved using LinearSolve or Solve. The same procedure can be used for all the edges and corners. Since this is a small 3×3 system, we can precompute the expressions for the missing distributions symbolically. So for the top wall, we would have:
Clear[f, feq, rho, ubc, vbc];
feq = LBMEquilibriumDistributions[{rho}, {ubc}, {vbc}][[1]];
eqn1 = rho*ubc == Sum[ex[[i]]*f[i], {i, 5}] + Sum[ex[[i]]*(feq[[i]] +
      wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy)), {i, 6, 8}] + ex[[9]]*f[9];
eqn2 = rho*vbc == Sum[ey[[i]]*f[i], {i, 5}] + Sum[ey[[i]]*(feq[[i]] +
      wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy)), {i, 6, 8}] + ey[[9]]*f[9];
eqn3 = rho == Sum[f[i], {i, 5}] + Sum[(feq[[i]] +
      wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy)), {i, 6, 8}] + f[9];
res = FullSimplify[First[Solve[{eqn1, eqn2, eqn3}, {rho, qx, qy}]]]
Once {ρ, Qx, Qy} are known, we then substitute the values and obtain the unknowns:
FullSimplify[Table[(feq[[i]] + wts[[i]]*(ex[[i]]*qx
    + ey[[i]]*qy)), {i, 6, 8}] /. res]
When dealing with outflow boundary conditions, we are essentially trying to impose a zero-gradient condition across the boundary, i.e. the normal derivative of the velocity vanishes there. The simplest thing one can do is to take the velocity from one grid point behind the boundary and use that to compute fi at the boundary. However, that leads to inaccurate and, in many cases, unstable results.
Consider the right wall as shown in the following figure:
After the streaming step, the distributions f4, f5, f6 are unknown. To impose an outflow condition on the distribution functions fi, we use the following relation:
where u* is a velocity scale evaluated at the previous time step at the grid points (j, k), j = 1, 2, …, M, and fi,N(t) are the distributions at the previous time step. The scalar term u* must be positive. A good candidate is the magnitude of the normal velocity at the boundary, so for the left and right walls it would be |u|, and for the top and bottom walls it would be |v|. If u* comes out to be 0, it can be replaced by the characteristic lattice velocity. It is important to note that the unknown distributions change depending on which wall has the outflow condition. For example, if it is the top wall, then f6, f7, f8 are unknown. Details of this approach for outflow treatment can be found in the paper by Yang.
A second approach to imposing outflow condition would be to simply do the following:
This method is much simpler than the first one, and is based on applying a second-order backward difference formula to each of the distribution functions.
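One plausible reading of such an extrapolation, sketched in Python for a single row of nodes with made-up values (the exact stencil in the reference may differ), is to fill the boundary value from the two interior neighbors:

```python
# Hypothetical distribution values for one direction i along a row of nodes
# approaching the right wall; the last entry is the unknown boundary value.
f_row = [0.11, 0.12, 0.13, 0.14, None]

# Second-order backward extrapolation for the boundary node:
# f_N = 2 f_(N-1) - f_(N-2)   (a sketch; the exact stencil may differ)
f_row[-1] = 2 * f_row[-2] - f_row[-3]
print(f_row[-1])   # approximately 0.15 for this linearly increasing sample
```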
Once the streaming is done and boundary conditions are imposed, a “collision” is performed (recall the rule-based approach). This step is basically figuring out how the ensemble-averaged particles are going to interact with each other. This step is a bit complicated, but using the BGK approximation, the collision step can be written as:
where ρ, u, v are the density and velocities computed after the streaming step and the boundary-adjustment step. The term νLBM is the kinematic viscosity in the lattice Boltzmann domain, and τ is the relaxation parameter. This term is quite important and will be discussed in a bit.
This translates to two very simple lines of code in Mathematica:
LBMCollide[f_, u_, v_, rho_, tau_] := Block[{feq},
feq = LBMEquilibriumDistributions[rho, u, v];
f + (feq - f)/tau
];
From a visual standpoint, after the collision, the distribution functions are adjusted based on the previous formula, and we get new distributions:
Notice the change in the lengths of the arrows. That’s it! That is all it takes to run a lattice Boltzmann simulation.
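Collision is a purely local update, which is easy to check at a single node. The following illustrative Python sketch (made-up distribution values, not the post's code) relaxes a distribution toward its equilibrium and verifies that the collision conserves mass:

```python
ex = [1, 1, 0, -1, -1, -1, 0, 1, 0]
ey = [0, 1, 1, 1, 0, -1, -1, -1, 0]
wts = [1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 4/9]

def collide(f, tau):
    # One BGK collision at a single node: relax f toward its own equilibrium.
    rho = sum(f)
    u = sum(fi * e for fi, e in zip(f, ex)) / rho
    v = sum(fi * e for fi, e in zip(f, ey)) / rho
    out = []
    for fi, w, exi, eyi in zip(f, wts, ex, ey):
        eu = exi * u + eyi * v
        fe = w * rho * (1 + 3 * eu + 4.5 * eu * eu - 1.5 * (u * u + v * v))
        out.append(fi + (fe - fi) / tau)
    return out

# Made-up distribution values at one node, and a relaxation parameter tau.
f = [0.1, 0.02, 0.1, 0.02, 0.1, 0.02, 0.1, 0.02, 0.4]
g = collide(f, 0.53)
print(abs(sum(g) - sum(f)) < 1e-12)   # True: the collision conserves mass
```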
Bringing Simulation to Reality with the Reynolds Number
So how is something like this supposed to simulate the fabled Navier–Stokes equations? To answer that question, we would have to go into a lot of math and multiscale analysis, but in short, the form of feq dictates the macroscopic equations that the LBM simulates. So it is actually possible to simulate a whole bunch of PDEs by using the appropriate equilibrium functions. Once again, I will leave it to the reader to go out into the internet world and get a sample of the remarkable things people simulate using the LBM.
Having seen the basic mechanism of the LBM, the obvious next question is: how do the simulations that are performed in the lattice-based system translate to the physical world? This is done by matching the non-dimensional parameters of the lattice system and the physical system. In our case, it is a single non-dimensional parameter: the Reynolds number. The Reynolds number (Rey) is defined as Rey = Uphy Lphy/νphy,
where Uphy is a characteristic velocity, Lphy is the characteristic length and νphy is the kinematic viscosity in the physical domain. In order to simulate the flow, the user is expected to specify the Reynolds number, the characteristic velocity and the characteristic length. From these three pieces of information, the kinematic viscosity is determined, and subsequently the relaxation parameter is determined. Using these pieces of information and the underlying equations associated with the LBM, the internal parameters of the simulation are computed.
The characteristic lattice velocity ULBM must never exceed 1/√3 ≈ 0.577 (this number is the speed of sound in the lattice domain). The lattice velocity must remain significantly below this value for it to properly simulate incompressibility. In general, the lattice velocity is taken to be ULBM = 0.1. Similarly, the characteristic lattice length LLBM represents the number of points used in the lattice domain to resolve the characteristic length in the physical domain. LLBM is an integer quantity, and is typically user defined.
Let us look at an example to solidify how to relate the lattice simulation to the physical simulation. Let us assume that Rey = 100, Uphy = 1, Lphy = 1 and the physical dimensions of the wind tunnel are Lw = 15 Lphy, Hw = 5 Lphy. For the lattice domain, we assume ULBM = 0.1, LLBM = 10. The lattice wind tunnel dimensions are then Lw,LBM = 15 × 10 = 150 and Hw,LBM = 5 × 10 = 50. The viscosity in the lattice domain is:
νLBM = ULBM LLBM / Rey = (0.1 × 10)/100 = 0.01
This means that the relaxation parameter in the BGK model is:
τ = 3 νLBM + 1/2 = 0.53
We now have all the quantities we need to run the simulation. If we now let each lattice time step in the simulation be tLBM = 1, then we need to know what tphy is. This is done by equating the viscosities of the two systems and is given by:
tphy = (νLBM/νphy) (Lphy/LLBM)² tLBM = (0.01/0.01) × (1/10)² × 1 = 0.01
Therefore, if we want to run our simulation for t = 100 (time units), then in the lattice domain we would be iterating for 100/0.01 = 10,000 steps.
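The parameter bookkeeping above is mechanical enough to script. Here is a small Python sketch (an illustration of the relations just described, not part of the WindTunnel2DLBM package) that maps a physical specification to the lattice parameters:

```python
def lattice_parameters(reynolds, u_phy, l_phy, u_lbm=0.1, l_lbm=10):
    """Map a physical flow specification to lattice (LBM) parameters.

    Assumes the D2Q9 BGK relation nu = (tau - 1/2)/3 in lattice units,
    with the physical time step obtained by equating viscosities.
    """
    nu_phy = u_phy * l_phy / reynolds       # physical kinematic viscosity
    nu_lbm = u_lbm * l_lbm / reynolds       # lattice viscosity (same Reynolds number)
    tau = 3.0 * nu_lbm + 0.5                # BGK relaxation parameter
    dx_phy = l_phy / l_lbm                  # physical length of one lattice cell
    dt_phy = nu_lbm * dx_phy ** 2 / nu_phy  # physical duration of one lattice step
    return {"nu_lbm": nu_lbm, "tau": tau, "dt_phy": dt_phy}

# the worked example: Rey = 100, Uphy = 1, Lphy = 1, ULBM = 0.1, LLBM = 10
p = lattice_parameters(100, 1.0, 1.0)
steps = round(100 / p["dt_phy"])  # iterations needed for t = 100 time units
```

With these inputs, τ comes out to 0.53 and one lattice step corresponds to 0.01 physical time units, so t = 100 requires 10,000 iterations.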
The Reynolds number is a remarkable non-dimensional parameter. Rather than specify what fluid we are simulating at what velocities and at what dimension, the Reynolds number ties them all together. This means that if we have two systems of vastly different length, velocity scale and fluid medium, the two flows will behave the same as long as the Reynolds number remains the same.
Adding Objects to the Wind Tunnel
Let us now talk about how to introduce objects into the wind tunnel. One approach would be to discretize the objects into a jagged “steps” version of the original object and align it with the grid, then impose a no-slip boundary condition on each one of the step edges and corners:
This is really not a good approach because it distorts the original object; if one needed a good representation of the object, they would have to use an extremely fine grid—making it computationally expensive and wasteful. Furthermore, sharp corners can often induce unwanted behavior in the flow. A second approach would be to immerse the object into the grid. The boundary of the object is discretized and is immersed into the domain:
Discretizing the boundary of a specified object can easily be done using the built-in function BoundaryDiscretizeRegion. We can specify Disk or Circle to generate a set of points that represents the discretized version of the circular object:
#10005
bmr = BoundaryDiscretizeRegion[Disk[]];
pts = MeshCoordinates[bmr];
Show[bmr, Graphics[Point[pts]], ImageSize - 250]
This method of discretizing the object and placing it inside the grid is called the immersed boundary method (IBM). Once the discretized object is immersed, the effect of that object on the flow needs to be modeled while making sure that the velocity conditions of the boundary are respected. One method of making the flow “feel the presence” of the immersed object is through a method called the direct forcing method. With this approach, the lattice BGK model is modified by adding a forcing term Fi to the evolution equation:
where Fi is the corrective force induced on a grid point by the object boundaries. The equation for computing the velocity is now modified to include half of the force contribution:
u = (Σi ei fi + F/2)/ρ
The corrective force at a boundary point is proportional to the difference between the prescribed boundary velocity and the velocity the fluid would have there without the object:
F(Xk) ∝ U(Xk) − u*(Xk)
where U(Xk) are the prescribed boundary velocities of the object, u*(Xk) are the velocities at the object boundaries if the object were not present, δ is an approximation to the delta function used to communicate between the boundary points and the grid, and Xk are the positions of the boundary points of the object, generally called Lagrangian boundary points. There are several choices of δ that one can use. We will make use of the following compactly supported function:
This approximation is also called the mollifier kernel and can be defined using the Piecewise function:
Clear[deltaFun];
deltaFun[r_?NumericQ] :=
  Piecewise[{{(5 - 3 Abs[r] - Sqrt[1 - 3 (1 - Abs[r])^2])/6,
     0.5 <= Abs[r] <= 1.5}, {(1 + Sqrt[1 - 3 r^2])/3,
     Abs[r] <= 0.5}}];
Plot[deltaFun[r], {r, -2, 2}, ImageSize -> 300]
The 2D function δ(x – X, y – Y) is then given by the product:
δ(x – X, y – Y) = deltaFun[(x – X)/dx] deltaFun[(y – Y)/dy]
where dx, dy are scaling parameters. For each Lagrangian point (Xi, Yi), such a function is specified. Here is what the delta function would look like if centered at (1/2, 1/2) and scaled by 1:
Plot3D[deltaFun[(x - 1/2)]*deltaFun[(y - 1/2)], {x, -2, 2}, {y, -2, 2}]
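A property worth checking is that this kernel forms a partition of unity: for any offset r, the weights it assigns to the lattice points in its support sum to exactly 1, which is what makes it usable as a discrete delta function. Here is a quick Python transcription of deltaFun and a check of that property (an independent sanity check, not part of the package):

```python
import math

def delta_fun(r):
    """The compactly supported 3-point kernel defined above."""
    r = abs(r)
    if r <= 0.5:
        return (1 + math.sqrt(1 - 3 * r * r)) / 3
    if r <= 1.5:
        return (5 - 3 * r - math.sqrt(1 - 3 * (1 - r) ** 2)) / 6
    return 0.0

# the kernel weights over the integer lattice sum to 1 for any offset r
for r in [0.0, 0.1, 0.25, 0.49, 0.77, 1.2]:
    total = sum(delta_fun(r - i) for i in range(-2, 3))
    assert abs(total - 1.0) < 1e-12
```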
Let’s look at an example to demonstrate this immersed boundary concept, as well as how the δ function is constructed and how it is used for approximating a function. Assume that a circle is immersed in a rectangular domain:
dx = dy = 0.1; (* spacing of the background grid *)
grid = Flatten[Table[N@{x, y}, {x, -2, 2, dx}, {y, -2, 2, dy}], 1];
n = 64; (* Lagrangian boundary points on the immersed unit circle *)
bpts = Table[N@{Cos[s], Sin[s]}, {s, 0, 2 Pi - 2 Pi/n, 2 Pi/n}];
(* grid points inside each boundary point's kernel support, and their weights *)
influenceGridPtsIndex = Table[Select[Range[Length[grid]],
     Max[Abs[(grid[[#]] - b)/{dx, dy}]] <= 1.5 &], {b, bpts}];
dval = MapThread[Function[{b, idx},
    (deltaFun[(grid[[#, 1]] - b[[1]])/dx]*
       deltaFun[(grid[[#, 2]] - b[[2]])/dy]) & /@ idx],
   {bpts, influenceGridPtsIndex}];
t = Flatten[MapThread[Thread[Thread[{#1, #2}] -> #3] &,
    {Range[n], influenceGridPtsIndex, dval}]];
dMat = SparseArray[t, {n, Length[grid]}]
This matrix can now be used to compute the values at the Lagrangian points. For example, if the underlying grid carries values defined by h(x, y) = Sin(x + y), then the values at the Lagrangian points are computed as:
h(Xi, Yi) = Σj Dij h(xj, yj)
where D is the discretized delta-function matrix and h(xj, yj) are the function values at the grid points (xj, yj):
bptVal = dMat.Sin[Total[grid, {2}]];
We can compare the computed interpolated value to the actual values:
ListLinePlot[{bptVal, Sin[Total[bpts, {2}]]}, ImageSize -> 300,
PlotStyle -> {Red, Black}, PlotLegends -> {"Computed", "Actual"}]
Similarly, the function values at the grid can be computed using the function values at the Lagrangian points as:
wts = IdentityMatrix[n];
wts[[1, 1]] = wts[[-1, -1]] = 0.5;
wts *= (Norm[bpts[[2]] - bpts[[1]]])/(dx*dy);
gridVal = Sin[Total[bpts, {2}]].wts.dMat;
hfun = Interpolation[Transpose[{grid, gridVal}]];
Plot3D[hfun[x, y], {x, -2, 2}, {y, -2, 2}, PlotRange -> All]
As you can see, since the functions have compact support, only grid points that lie in their radius of influence get interpolated values. All grid points that are not in their support radius are 0.
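The same interpolation construction can be reproduced outside the Wolfram Language. The following NumPy sketch (a standalone illustration with hypothetical grid and boundary sizes, not the package code) builds the matrix of tensor-product kernel weights for boundary points on a unit circle and interpolates h(x, y) = sin(x + y) from the grid to those points:

```python
import numpy as np

def delta_fun(r):
    """Vectorized version of the compactly supported 3-point kernel."""
    r = np.abs(r)
    out = np.zeros_like(r)
    inner = r <= 0.5
    mid = (r > 0.5) & (r <= 1.5)
    out[inner] = (1 + np.sqrt(1 - 3 * r[inner] ** 2)) / 3
    out[mid] = (5 - 3 * r[mid] - np.sqrt(1 - 3 * (1 - r[mid]) ** 2)) / 6
    return out

# background grid on [-2, 2]^2 and Lagrangian points on the unit circle
dx = dy = 0.05
xs = np.arange(-2, 2 + dx / 2, dx)
gx, gy = np.meshgrid(xs, xs, indexing="ij")
grid = np.column_stack([gx.ravel(), gy.ravel()])
s = np.linspace(0, 2 * np.pi, 64, endpoint=False)
bpts = np.column_stack([np.cos(s), np.sin(s)])

# row k holds the kernel weights of boundary point k over all grid points
wx = delta_fun((grid[None, :, 0] - bpts[:, None, 0]) / dx)
wy = delta_fun((grid[None, :, 1] - bpts[:, None, 1]) / dy)
dmat = wx * wy

# interpolate h(x, y) = sin(x + y) from the grid to the boundary points
h_interp = dmat @ np.sin(grid.sum(axis=1))
h_exact = np.sin(bpts.sum(axis=1))
err = np.max(np.abs(h_interp - h_exact))  # small, since the weights sum to 1
```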
So, to remind the reader, one single time step in the lattice Boltzmann simulation with an immersed boundary consists of the following steps:
1. Perform the streaming step.
2. Adjust the distribution functions at the boundaries.
3. Perform the collision step.
4. Compute the velocities at the grid points.
5. Compute the velocities at the Lagrangian boundary points of the objects.
6. For each boundary point of the object, compute the corrective force needed to enforce the boundary conditions at that point.
7. Compute the corrective forces at the lattice grid points using the forces obtained in step 6.
8. Perform the streaming and collision steps, taking the forces into account.
9. Calculate density and velocities.
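To make the streaming and collision steps concrete, here is a minimal Python/NumPy sketch of a bare stream-and-collide update on a fully periodic D2Q9 grid. It has no walls, objects or forcing, and it is not the package's implementation, just the skeleton that the list above wraps:

```python
import numpy as np

# D2Q9 lattice velocities e_i and weights w_i
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.53  # BGK relaxation parameter

def equilibrium(rho, u):
    """Standard D2Q9 equilibrium distribution feq_i(rho, u)."""
    eu = np.einsum("id,xyd->ixy", e, u)       # e_i . u at every grid point
    usq = np.einsum("xyd,xyd->xy", u, u)      # |u|^2
    return w[:, None, None] * rho * (1 + 3 * eu + 4.5 * eu**2 - 1.5 * usq)

def step(f):
    """One LBM update on a periodic grid: stream, then BGK collide."""
    f = np.stack([np.roll(f[i], shift=tuple(e[i]), axis=(0, 1))
                  for i in range(9)])                   # streaming
    rho = f.sum(axis=0)                                 # density
    u = np.einsum("ixy,id->xyd", f, e) / rho[..., None] # velocity
    f += (equilibrium(rho, u) - f) / tau                # collision
    return f, rho, u

# start from rest plus a small sinusoidal shear and take a few steps
nx = ny = 32
u0 = np.zeros((nx, ny, 2))
u0[..., 0] = 0.05 * np.sin(2 * np.pi * np.arange(nx) / nx)[:, None]
f = equilibrium(np.ones((nx, ny)), u0)
for _ in range(10):
    f, rho, u = step(f)
# mass is conserved by both streaming and collision: rho.sum() stays at 32*32
```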
This concludes all the necessary ingredients needed to run a wind tunnel simulation using the LBM in 2D.
Examples
To make this wind tunnel easy to use, I have put all these functions into a package called WindTunnel2DLBM. It contains a number of features and allows for easy setup of the problem by a user. I would recommend the interested user go through the package documentation for details. The focus here will be on the various examples and the flexibility our computational wind tunnel setup offers.
The first example is the flow in the wind tunnel. This is perhaps the simplest case. A schematic of the domain and its associated boundary conditions are shown here:
In this case, there is only one length scale to the problem: the height of the wind tunnel. Therefore, that becomes our characteristic length scale. The characteristic velocity in this case is the maximum velocity coming from the inlet, which is set to 1. All that remains is to specify the Reynolds number at which the simulation is to be carried out. This is user defined as well. Let us take the length of the wind tunnel to be 6 units going from (0,6), and the height to be 2 units going from (-1,1). We now set up the simulation:
Needs["WindTunnel2DLBM`"];
Rey = 200;
charLen = 1;
charVel = 1;
ic = Function[{x, y}, {0, 0}];
state = WindTunnelInitialize[{Rey, charLen, charVel},
ic, {x, 0, 6}, {y, -1, 1}, t]
Notice that we did not provide any boundary condition information here. That is because the wind tunnel defaults to the flow in a channel case, and therefore all the boundary conditions are automatically imposed. All we have to specify are the characteristic information and the dimensions of the wind tunnel.
The simulation is performed using a fixed time step. The time step is internally computed and can be accessed from the following property:
state["TimeStep"]
Let us now run the simulation for a period of 5 time units:
WindTunnelIterate[state, 5];
state
We can query the data at the final step of the simulation:
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
The solution can be visualized in a variety of ways. For 2D simulation, a streamline plot can reveal some useful information. Let us visualize the streamline plot:
StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 6}, {y, -1, 1},
 AspectRatio -> Automatic, ImageSize -> 400, PlotRangePadding -> None]
Notice that the streamlines are not completely parallel (there is a bit of deviation). To see why, let us look at the profiles of the u component of the velocity field at various x locations:
Plot[Evaluate[{usol[#, y] & /@ Range[0, 6, 2]}], {y, -1, 1},
 ImageSize -> 300, PlotLegends -> Range[0, 6, 2],
 PlotStyle -> {Black, Red, Blue, Green}]
This indicates that the velocities have a spatial dependence. For this particular problem, we should expect the flow to reach steady state, i.e. the flow should not vary with time. Let us run the simulation for an additional 20 time units and see the velocity profile:
WindTunnelIterate[state, 20];
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
Plot[Evaluate[{usol[#, y] & /@ Range[0, 6, 2]}], {y, -1, 1},
 ImageSize -> 300, PlotLegends -> Range[0, 6, 2],
 PlotStyle -> {Black, Red, Blue, Green}]
We see that the velocity profiles at the various x locations are almost the same as each other. This gives us an indication that the flow is indeed reaching steady state.
The Flow-in-a-Box Problem
Let us now look at the classic flow-in-a-box problem. Here’s the schematic of the domain and the boundary condition information:
The top wall moves with a horizontal velocity of 1 (length units/time units), while all the others are stationary, no-slip walls. The circles inside the box denote the kind of fluid behavior that might be expected. As the top wall moves, the wall drags the fluid below it, causing the fluid to rotate—that is the big circle in the schematic, and it represents a vortex. If there is sufficient strength in the main vortex, then we can expect it to start causing smaller, secondary vortices to form. Our hypothesis is that the strength of the vortex should be related to the Reynolds number. Let us see what happens by running the simulation at Reynolds number 100:
state = WindTunnelInitialize[{100, 1, 1},
  Function[{x, y}, {0, 0}], {x, 0, 1}, {y, 0, 1}, t,
  "CharacteristicLatticePoints" -> 60,
  "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
    "Right" -> "NoSlip", "Bottom" -> "NoSlip",
    "Top" -> Function[{x, y, t}, {1, 0}]}];
We now iterate for 50 time units:
WindTunnelIterate[state, 50];
Visualizing the result shows us that there is a primary vortex that forms near the middle, while a smaller, secondary vortex forms at the bottom right of the box:
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 1}, {y, 0, 1},
 AspectRatio -> Automatic, StreamPoints -> Fine,
 PlotRangePadding -> None, ImageSize -> 300]
If the Reynolds number is ramped up, then these secondary vortices become stronger and larger, and additional vortices start developing in the corners. Let us look at the case when the Reynolds number is 1,000:
state = WindTunnelInitialize[{1000, 1, 1},
  Function[{x, y}, {0, 0}], {x, 0, 1}, {y, 0, 1}, t,
  "CharacteristicLatticePoints" -> 60,
  "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
    "Right" -> "NoSlip", "Bottom" -> "NoSlip",
    "Top" -> Function[{x, y, t}, {1, 0}]}];
We will again iterate for 50 time units:
WindTunnelIterate[state, 50];
Let us visualize the result:
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 1}, {y, 0, 1},
 AspectRatio -> Automatic, StreamPoints -> Fine,
 PlotRangePadding -> None, ImageSize -> 300]
Notice that the primary vortex has moved closer to the center; from the looks of it, it’s strong enough to be able to form secondary vortices at the bottom left and bottom right of the domain.
Let us now see what happens if we do the simulation on a “tall” box rather than a square one. The boundary conditions remain the same, but the domain changes in the y direction:
state = WindTunnelInitialize[{1000, 1, 1},
  Function[{x, y}, {0, 0}], {x, 0, 1}, {y, 0, 2}, t,
  "CharacteristicLatticePoints" -> 60,
  "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
    "Right" -> "NoSlip", "Bottom" -> "NoSlip",
    "Top" -> Function[{x, y, t}, {1, 0}]}];
Run the simulation and use ProgressIndicator to track the progress. This simulation will take a few minutes:
ProgressIndicator[Dynamic[state["CurrentTime"]], {0, 50}]
AbsoluteTiming[WindTunnelIterate[state, 50]]
Visualize the streamlines:
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 1}, {y, 0, 2},
 AspectRatio -> Automatic, StreamPoints -> Fine,
 PlotRangePadding -> None, ImageSize -> Medium]
In the tall-box scenario, a primary vortex is developed near the top wall, and that vortex in turn creates another vortex below it. If that second vortex is strong enough, it will create vortices at the bottom corners of the box.
We can already see the flexibility our wind tunnel is providing us. Let us now put an object inside the wind tunnel and observe the behavior of the flow. For this example, use a circular object:
This is the same as flow in a channel (our first example), but with an object placed in the channel. Notice that there are now two length scales, d and H. The choice of the characteristic length, though arbitrary, must tie back to some aspect of the physics of the flow. In this example, if the size of the object were increased or decreased, the flow pattern behind it would be expected to change. Therefore, the natural choice is to use d as the characteristic length.
Let us place the cylinder at (3, 0) in the domain. Let the size of the cylinder be 1 length unit. Therefore, the characteristic scale will be 1. Let the domain size be (0, 15) × (–2, 2). The object is specified as a ParametricRegion:
Remove[state];
state = WindTunnelInitialize[{200, 1, 1},
  Function[{x, y}, {0, 0}], {x, 0, 15}, {y, -2, 2}, t,
  "CharacteristicLatticePoints" -> 15,
  "ObjectsInTunnel" -> {ParametricRegion[{3 + Cos[s]/2,
      Sin[s]/2}, {{s, 0, 2 Pi}}]}]
It is a good idea to visualize the tunnel before starting the simulation, to make sure the object is in the correct position:
ListLinePlot[state["ObjectsInTunnel"],
 PlotRange -> {{0, 14}, {-2, 2}}, AspectRatio -> Automatic,
 Axes -> False, Frame -> True, ImageSize -> Medium,
 PlotLabel ->
  StringForm["GridPoints: ``", Reverse@state["GridPoints"]]]
Let us simulate the flow for 10 time units:
WindTunnelIterate[state, 10];
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
Rasterize@
 Show[LineIntegralConvolutionPlot[{{usol[x, y], vsol[x, y]}, {"noise",
     300, 400}}, {x, 0, 15}, {y, -2, 2}, AspectRatio -> Automatic,
   ImageSize -> Medium, PlotRangePadding -> None,
   LineIntegralConvolutionScale -> 2, ColorFunction -> "RoseColors"],
  ListLinePlot[state["ObjectsInTunnel"],
   PlotStyle -> {{Thickness[0.005], Black}}]]
There are two things to notice here: the symmetric pair of vortices behind the cylinder and the flow inside the cylinder. A close-up reveals that there is some flow pattern inside the cylinder as well:
Show[StreamPlot[{usol[x, y], vsol[x, y]}, {x, 2.4, 3.6}, {y, -1/2,
   1/2}, AspectRatio -> Automatic, ImageSize -> Medium,
  PlotRangePadding -> None],
 ListLinePlot[state["ObjectsInTunnel"],
  PlotStyle -> {{Thickness[0.01], Blue}}]]
This behavior is because we are making use of the IBM. As mentioned earlier, the IBM computes a set of forces to be applied at the grid points such that the velocity at the immersed surface is 0. It does not specify what needs to happen inside the cylinder. Therefore, the flow being incompressible, a flow pattern exists inside the cylinder as well. The important thing is that the velocities at the boundaries of the object are 0 (no-slip).
Let us now continue to iterate for 30 time units and see what happens to the pattern behind the cylinder. Sometimes, it can be helpful to look at another variable called vorticity to get a better understanding of what is happening:
WindTunnelIterate[state, 30];
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
Set up the color scheme for the contours:
cc = N@Range[-3, 3, 4/100];
cc = DeleteCases[cc, x_ /; -0.4 <= x <= 0.4];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] /@ cc;
Visualize the vorticity:
Remove[vort];
vort = D[usol[x, y], y] - D[vsol[x, y], x];
Rasterize@Show[ContourPlot[vort, {x, 0, 15},
   {y, -2, 2}, AspectRatio -> Automatic, ImageSize -> 500,
   Contours -> cc, ContourShading -> None, ContourStyle -> colors,
   PlotRange -> {{0, 15}, {-2, 2}, All}],
  Graphics[Polygon[state["ObjectsInTunnel"]]]]
We now notice that the symmetric pattern has been destroyed and is replaced by this “wavy” behavior; the vorticity clearly shows the wavy behavior. What we notice here is called an instability in the wake of the cylinder. This instability continues to amplify, and eventually vortices start forming behind the cylinder. This phenomenon is called “vortex shedding.” There is a shear layer generated at the surface of the cylinder that gets carried downstream.
This vortex shedding is also dependent on the Reynolds number. For small enough Reynolds numbers, we don’t get any shedding. However, at around 100–150, the shedding is observed. To properly observe this phenomenon, it would be good to see the time evolution of this flow. As a first step, set up the problem by defining the characteristic terms and the objects in the tunnel:
state = WindTunnelInitialize[{200, 1, 1},
  Function[{x, y}, {0, 0}], {x, 0, 15}, {y, -2, 2}, t,
  "CharacteristicLatticePoints" -> 15,
  "ObjectsInTunnel" -> {ParametricRegion[{3 + Cos[s]/2,
      Sin[s]/2}, {{s, 0, 2 Pi}}]}];
To produce a time evolution of the vorticity, we will extract the solution at each time unit and generate a series of plots:
cc = N@Range[-5, 5, 10/200];
cc = DeleteCases[cc, x_ /; -0.5 <= x <= 0.5];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] /@ cc;
res = Table[
   WindTunnelIterate[state, t];
   {usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
   vort = D[usol[x, y], y] - D[vsol[x, y], x];
   plot =
    Show[ContourPlot[vort, {x, 0, 15}, {y, -2, 2},
      AspectRatio -> Automatic, ImageSize -> Medium, Contours -> cc,
      ContourShading -> None, ContourStyle -> colors,
      PlotRange -> {{0, 15}, {-2, 2}, All}],
     Graphics[Point /@ state["ObjectsInTunnel"]]];
   Rasterize[plot], {t, 0, 50, 1}];
Running the simulation clearly shows two vortices forming in the back of the cylinder with the shear layer slowly getting perturbed, which then increases in amplitude before finally breaking into a vortex shedding:
ListAnimate[res, DefaultDuration -> 10, AnimationRunning -> False]
Observing Disturbances Caused by a Moving Object
For our next example, we will exploit the immersed boundary treatment and “immerse” a circular tank inside our wind tunnel. The boundary of the tank will have 0-velocities. Inside this tank, we will immerse an elliptical object. This object is placed near the tank wall and follows the tank boundary in a circular path. The immersed boundary formulation of the lattice Boltzmann method makes it straightforward to handle such moving objects. The objective is to study what kind of disturbances develop when this object moves through a still fluid.
Set up the problem by defining the characteristic terms. In this case, the simulation will be performed at a Reynolds number of 400. The characteristic length and velocity are specified as unity. There are two objects in the tunnel. The first object is the large circular tank that is held stationary; the second is the elliptical object that will be moving inside this tank:
Remove[state];
state = WindTunnelInitialize[{400, 1, 1},
  Function[{x, y}, {0, 0}], {x, -2.2, 2.2}, {y, -2.2, 2.2}, t,
  "CharacteristicLatticePoints" -> 25,
  "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
    "Right" -> "NoSlip", "Top" -> "NoSlip", "Bottom" -> "NoSlip"},
  "ObjectsInTunnel" -> {{ParametricRegion[{1.3 + 0.2*Sin[s],
       0.5*Cos[s]}, {{s, 0, 2 Pi}}], Function[{xb, yb, t}, {-yb, xb}]},
    {ParametricRegion[{2*Sin[s], 2*Cos[s]}, {{s, 0, 2 Pi}}]}}]
As always, it is a good idea to check the geometry of the underlying problem. We can do that by simply extracting the discretized object when doing the initialization; we can see that everything is where it is supposed to be:
Graphics[Map[Line, state["ObjectsInTunnel"]], Frame -> True,
 ImageSize -> Small]
As we did before, we will be looking at the vorticity contours of the flow. Let us first define the color scheme and the levels of contours that will be plotted:
cc = N@Range[-7, 7, 14/100];
cc = DeleteCases[cc, x_ /; -0.1 <= x <= 0.1];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] /@ cc;
The simulation is now run for 60 time units:
oreg = RegionPlot[x^2 + y^2 <= 2^2, {x, -2.2, 2.2}, {y, -2.2, 2.2},
   PlotStyle -> Black];
AbsoluteTiming[res = Table[
    WindTunnelIterate[state, tt];
    {usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
    vort = D[usol[x, y], y] - D[vsol[x, y], x];
    Rasterize@
     Show[ContourPlot[vort, {x, -2, 2}, {y, -2, 2},
       AspectRatio -> Automatic, ImageSize -> 300, Contours -> cc,
       ContourShading -> None, ContourStyle -> colors,
       PlotRange -> {{-2, 2}, {-2, 2}, All}],
      Graphics[Polygon[First[state["ObjectsInTunnel"]]]], oreg], {tt,
     0, 60, 1/2}];]
Running the time evolution of the fluid disturbance shows that a very beautiful geometric pattern is formed within the tank initially before settling down to a more uniform circular disturbance:
ListAnimate[res, DefaultDuration -> 10, AnimationRunning -> False,
 ImageSize -> Automatic]
Flow in a Pipe with Bends and Obstacles
For the sake of curiosity (and fun), what kind of flow pattern would we expect for the following geometry?
Fluid enters the pipe from the right end, moves up the pipe and then gets discharged from the left end. What would be the effect of that stopper at the right end? How will it impact the discharge?
This is surprisingly easy to figure out with our current setup. Again, we just immerse the pipe and the obstacle within it into our wind tunnel. The left, right and top boundaries of the wind tunnel are given a 0-velocity condition. The bottom boundary is given an outflow condition for –1 ≤ x ≤ –0.7, a 0-velocity condition for –0.7 ≤ x ≤ 0.7 and a parabolic velocity profile for 0.7 ≤ x ≤ 1:
Remove[state];
inletVel = Fit[{{7/10, 0}, {17/20, 1}, {1, 0}}, {1, x, x^2}, x];
state = WindTunnelInitialize[{500, 0.3, 1},
  Function[{x, y}, {0, 0}], {x, -1.1, 1.1}, {y, 0, 1.1}, t,
  "CharacteristicLatticePoints" -> 20,
  "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
    "Right" -> "NoSlip", "Top" -> "NoSlip",
    "Bottom" ->
     Function @@
      List[{x, y, t},
       If @@ List[0.7 <= x <= 1., {0, inletVel},
         If[-1 <= x <= -0.7, "Outflow", {0, 0}]]]},
  "ObjectsInTunnel" -> {ImplicitRegion[
     0.7 <= (x^4 + y^4)^(1/4) <= 1, {{x, -1, 1}, {y, -0.2, 1}}],
    ParametricRegion[{0.22 + t, t - 0.2}, {{t, 0.55, 0.7}}]}]
Let us run it for 40 time units:
ProgressIndicator[Dynamic[state["CurrentTime"]], {0, 40}]
AbsoluteTiming[WindTunnelIterate[state, 40]]
Let us plot the vorticity:
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
vort = D[usol[x, y], y] - D[vsol[x, y], x];
cc = N@Range[-20, 20, 40/50];
cc = DeleteCases[cc, x_ /; -0.1 <= x <= 0.1];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] /@ cc;
Rasterize@
 Show[ContourPlot[vort, {x, -1, 1}, {y, 0, 1},
   AspectRatio -> Automatic, ImageSize -> Medium, Contours -> cc,
   ContourShading -> None, ContourStyle -> colors,
   PlotRange -> {{-1, 1}, {0, 1}, All},
   RegionFunction -> Function[{x, y}, 0.7 <= (x^4 + y^4)^(1/4) <= 1]],
  Graphics[Point /@ state["ObjectsInTunnel"]]]
We see that the obstacle/stopper introduces a vortex shedding, which travels down the pipe. Let us look at the velocities at y = 0:
Plot[vsol[x, 0], {x, -1, 1}, PlotRange -> {{-1, 1}, {-1, 1}},
 ImageSize -> Medium]
If we compare the velocity profile between the outlet (at the left) and inlet (at the right), we see that the outlet velocity is almost half of the inlet. This gives us compelling evidence that the stopper has caused a reduction in fluid discharge from the left end of the pipe, which is to be expected.
Simulating the Flow over an Airfoil
As a final example, let us look at the flow around an airfoil. An airfoil basically represents the cross-section of an airplane wing, and is the fundamental thing that actually allows an airplane to lift off the ground. There are many types of airfoils, but we will focus on a simple one described by the parametric equation (x, y) = (t², 0.2 (t – t³ + (t² – t⁴)/b)/a) for –1 ≤ t ≤ 1. The parameter a controls how thick the airfoil should be, and the parameter b controls the curvature of the airfoil:
Clear[mat, a, b, t, AOA];
Manipulate[
 mat = {{Cos[AOA Degree], -Sin[AOA Degree]}, {Sin[AOA Degree],
    Cos[AOA Degree]}};
 ParametricPlot[
  mat.{t^2, 0.2 (t - t^3 + (t^2 - t^4)/b)/a}, {t, -1, 1},
  AspectRatio -> Automatic, ImageSize -> Medium,
  PlotRange -> {{0, 1}, {-0.2, 0.5}}], {{a, 1}, 0.1, 10}, {{b, 0.9},
  0.3, 10}, {{AOA, 0, "Angle of Attack"}, -20, 20}]
In order for the aircraft to get “lift,” i.e. be able to get off the ground, the top surface of the airfoil should have a lower pressure distribution than the bottom surface. This pressure difference causes the wing to lift upward (along with anything attached to it), and is achieved by having wind blow over the surface at sufficiently high speeds. A second consideration is that the wing generally needs to be tilted, i.e. have an “angle of attack,” to ensure greater lift. We will give the airfoil a –10° angle of attack. The simulation will be run at a Reynolds number of 1,000. Now, I should point out that a Reynolds number of 1,000 is a rather small value; a typical Reynolds number for a small aircraft is around 1 million. A full-scale simulation is just not possible on a laptop because of the large grid size required. However, even at 1,000, we should be able to get a good understanding of the underlying dynamics. For this example, a uniform flow field comes in from the left. The top and bottom tunnel boundaries are set to be periodic, and the right boundary is set to an outflow. The characteristic length here will be the thickness of the airfoil:
state = WindTunnelInitialize[{1000, 0.2, 1},
  Function[{x, y}, {0, 0}], {x, -2, 6}, {y, -1., 1.}, t,
  "CharacteristicLatticePoints" -> 20,
  "CharacteristicLatticeVelocity" -> 0.05,
  "TunnelBoundaryConditions" -> {"Left" ->
     Function[{x, y, t}, {1, 0}], "Right" -> "Outflow",
    "Top" -> "Periodic"},
  "ObjectsInTunnel" -> {ParametricRegion[{{Cos[-10 Degree], -Sin[-10 Degree]},
      {Sin[-10 Degree], Cos[-10 Degree]}}.{t^2,
      0.2 (t - t^3 + (t^2 - t^4)/0.9)/1}, {{t, -1, 1}}]}]
Before starting the simulation, extract the discretized object and check that it is in the appropriate location within the wind tunnel:
ListLinePlot[state["ObjectsInTunnel"],
 PlotRange -> Evaluate[state["Ranges"]], AspectRatio -> Automatic,
 Axes -> False, Frame -> True, ImageSize -> 400,
 PlotLabel ->
  StringForm["GridPoints: ``", Reverse@state["GridPoints"]]]
Notice the large number of grid points. This is because we are allowing 20 lattice points to resolve the thin airfoil. We now run the simulation for 10 time units. This takes a while because: (a) the resolution is quite large (800 × 200 grid points); and (b) to complete the simulation, 20,000 iterations must be performed:
10/state["TimeStep"]
Start the iteration process:
AbsoluteTiming[WindTunnelIterate[state, 10]]
Let us first look at the vorticity plot:
{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
vort = D[usol[x, y], y] - D[vsol[x, y], x];
cc = N@Range[-15, 15, 30/60];
cc = DeleteCases[cc, x_ /; -0.1 <= x <= 0.1];
cname = "VisibleSpectrum";
cdata = ColorData[cname];
crange = ColorData[cname, "Range"];
cMinMax = {Min[cc], Max[cc]};
colors = cdata[Rescale[#, cMinMax, crange]] /@ cc;
Show[ContourPlot[vort, {x, -0.5, 5}, {y, -1, 1},
  AspectRatio -> Automatic, ImageSize -> 500, Contours -> cc,
  ContourShading -> None, ContourStyle -> colors,
  PlotRange -> {{-0.5, 5}, {-1, 0.5}, All}],
 Graphics[{FaceForm[White], EdgeForm[Black],
   Polygon[state["ObjectsInTunnel"][[1]]]}]]
Just as in the case of the bluff body, we are seeing vortex shedding. For the case of the airfoil, this is not really a desirable property. We ideally want the flow to hug the surface. When the flow separates (as you see on the top surface of the airfoil), the pressure drop is not achieved properly and the airfoil will be unable to generate lift.
Let us now look at the pressure. Rather than plotting the pressure itself, we will plot a non-dimensional parameter called the pressure coefficient, defined by Cp = 2(p – p∞)/(ρLBM ULBM²), where p∞ is the pressure far upstream. We are interested in looking at the pressure at the object’s surface:
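As a small sanity check of the definition Cp = 2(p – p∞)/(ρ U²) (plain arithmetic, independent of the package), Bernoulli's relation implies that a stagnation point, where the flow is brought to rest, should give Cp = 1:

```python
def pressure_coefficient(p, p_inf, rho, u_ref):
    """Cp = 2 (p - p_inf) / (rho * u_ref**2)."""
    return 2.0 * (p - p_inf) / (rho * u_ref ** 2)

# at a stagnation point, Bernoulli gives p = p_inf + rho * u_ref**2 / 2,
# hence Cp = 1 there; far upstream p = p_inf and Cp = 0
rho, u_ref, p_inf = 1.0, 0.1, 0.0
p_stag = p_inf + 0.5 * rho * u_ref ** 2
cp = pressure_coefficient(p_stag, p_inf, rho, u_ref)
```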
PressureCoefficient[x_?NumericQ,
y_?NumericQ] := (psol[x, y] - psol[-2, 0])/(0.5*
state["InternalVelocity"]^2)
objs = state["ObjectsInTunnel"][[1]];
psol = "P" /. state[state["CurrentTime"]];
pp = Apply[psol, objs, 1];
pp = (pp - psol[-2, 0])/(0.5*state["InternalVelocity"]^2);
ListPlot[Transpose[{objs[[All, 1]], pp}], PlotRange -> All,
 Axes -> False, Frame -> True,
 FrameLabel -> {"x \[Rule]", "\!\(\*SubscriptBox[\(C\), \(p\)]\)"},
 FrameStyle -> Directive[Black, 14], ImageSize -> Medium]
You will notice that there are two lines here. The lower line represents the pressure on the top surface, while the upper line represents the pressure on the bottom surface. It is clear that despite some flow separation, we are getting a pressure difference. We can also plot the pressure contours and visualize them near the airfoil:
Show[Quiet@
  ContourPlot[
   PressureCoefficient[x, y], {x, -0.5, 1.5}, {y, -0.4, 0.4},
   AspectRatio -> Automatic, PlotRangePadding -> None,
   ColorFunction -> "TemperatureMap", Contours -> 40,
   PlotLegends -> Automatic, PlotRange -> All, ImageSize -> Medium],
 Graphics[{FaceForm[White], EdgeForm[Black],
   Polygon[state["ObjectsInTunnel"][[1]]]}]]
If you look carefully at the color scheme, you will indeed see that the top-surface pressure is lower than the bottom-surface pressure. So perhaps there is hope for this airfoil. The fluid-dynamic property we have just explored is the Bernoulli principle, which has applications in aviation (as we have seen here) and in fields such as automotive engineering.
This is just the start—there are many more examples you can try out! What we have discussed here is a good place to begin exploring this alternative approach to studying fluid dynamics problems and their implementation in Mathematica. The LBM combined with the IBM is a good tool for anyone interested in studying and analyzing fluid flows. With the help of Mathematica’s built-in functions, putting together the numerical wind tunnel is quite straightforward. The WindTunnel2DLBM package has helped me explore many fascinating concepts in the field of fluid dynamics (and make stunning visualizations). I hope you too will get inspired and dive into the exploration of fluid-flow phenomena.
Get full access to the latest Wolfram Language functionality with a Mathematica 12 or Wolfram|One trial.
Engage with the code in this post by downloading the Wolfram Notebook
Microsoft Excel is among the most popular tools in the world. For non-technical and advanced users aspiring to extend beyond Excel’s built-in feature set, we’re proud to announce the easiest and most productive tool for doing so: Wolfram CloudConnector for Excel, now available to anyone running Excel on a Windows system. You can access the advanced computational power of the Wolfram Language for your data directly from your spreadsheets.
We can also add additional parameters to the Wolfram function. These values get slotted into the expression. RandomWord can take additional parameters, such as a number, which will generate that many words:
RandomWord[5]
So in Excel, we can write:
=Wolfram("RandomWord",5)
Even though we have written this in a single Excel cell, the output fills out into a column.
Mixing Wolfram Language and Excel Code
CloudConnector automatically converts the Wolfram Language output to be displayed in Excel. If you wanted to get all the business days for the next 10 days, you would use the function DayRange:
DayRange[Now, DayPlus[Now, 10], "BusinessDay"]
Now let’s run this expression in a spreadsheet with the Wolfram function, applying quotes and escaped quotes as necessary. We add an “&” on the end to make the expression a pure function:
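The original post showed the resulting spreadsheet formula as a screenshot. A sketch of what it might look like, assuming the number of days is read from cell B2 (note the doubled quotes, which is how Excel escapes a quote inside a string, and the trailing "&" that makes the expression a pure function):

```
=Wolfram("DayRange[Now, DayPlus[Now, #1], ""BusinessDay""] &", B2)
```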
Any update to the cell being used as an argument (in this case, B2) will trigger recalculation of the formula in Excel.
Centralize as an APIFunction
Often you will want to store the code outside the spreadsheet, either because you don’t want users to see or edit it, or because you want to be able to push centralized updates to many users simultaneously. Deploying the code as an API and then calling it from the spreadsheet addresses this need.
Converting the Wolfram Language code we had before into an APIFunction only requires some minor changes:
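The deployment code itself was shown as an image in the original post. A minimal sketch of what the conversion might look like, wrapping the same DayRange expression in an APIFunction (the parameter name "days" and the Permissions setting are assumptions):

```
CloudDeploy[
 APIFunction[{"days" -> "Integer"},
  DayRange[Now, DayPlus[Now, #days], "BusinessDay"] &],
 Permissions -> "Public"]
```

CloudDeploy returns a CloudObject URL, which the spreadsheet can call in place of the embedded code, so updates to the deployed function reach every user at once.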
This formula evaluates entirely in the cloud. The source code is never seen by the caller of the API.
Advanced Computation, Minimal Code
With additional knowledge of the Wolfram Language, you can develop powerful Wolfram APIs, which would normally require a long and tedious development process in other systems.
Here, an APIFunction calculates the shortest tour of the 20 largest cities in a country and displays the result:
This is an APIFunction that takes a single parameter, "Location", which must be a "Country":
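The API code was shown as an image in the original post. A sketch of one way to build it, assuming an implicit entity class is used to pick the 20 most populous cities and FindShortestTour to order them:

```
CloudDeploy[
 APIFunction[{"Location" -> "Country"},
  Module[{cities, order},
   (* the 20 most populous cities of the requested country *)
   cities = EntityList[EntityClass["City",
      {"Country" -> #Location, "Population" -> TakeLargest[20]}]];
   (* order them along the shortest closed tour *)
   order = Last[FindShortestTour[GeoPosition /@ cities]];
   GeoListPlot[cities[[order]], Joined -> True]] &,
  "PNG"],
 Permissions -> "Public"]
```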
Notice how the Excel formula creates an image. This is specialized functionality built for CloudConnector that allows you to make changes to spreadsheet values to trigger updates to this image. With this short amount of code, you can connect your spreadsheets to the full computational power of the Wolfram Language.
Wolfram CloudConnector for Excel creates a user-focused, feature-filled link from Excel to the Wolfram Cloud, while cutting developer time for more advanced computation. CloudConnector is a free plugin that runs on Excel for Windows. It does not require any additional Wolfram technology to be installed—just an appropriate Wolfram Cloud Account to create APIs. CloudConnector for Excel is available now for use in both the Wolfram Cloud and Wolfram Enterprise Private Cloud.
It’s been a whirlwind week of talks, training, workshops, networking and special events, and we’ve just closed another successful Wolfram Technology Conference! The week offered a multitude of opportunities for attendees and internal staff alike to connect, learn and enjoy unique experiences one can only get in Champaign, Illinois, every October. I’m happy to provide some highlights from the week and invite you to save the date to join us next year: October 6–9, 2020.
We began this week with pre-conference training on topics from machine learning and neural networks to application building and “Computational X,” offering headquarters tours and an opening reception before the “real” conference even began. Monday’s opening keynote by CEO Stephen Wolfram covered a ton of ground, from a Version 12 recap to a roadmap of things to come. True to tradition, Stephen uncovered bugs in pre-release versions of our software, livecoded examples and gave the audience so much to look forward to.
Three Decades of Keynotes
This year had our 30th keynote address by Stephen Wolfram. In addition to this three-decade milestone, we had our first livestreamed keynote. Stephen touched on the 12.0 release in April as well as the progress toward 12.1 since then, discussed major accomplishments in the Wolfram Cloud and Wolfram Notebooks, and gave a sneak peek of some fascinating projects to come. Here are some highlights:
The Wolfram Notebook Archive was introduced as a way to preserve live, interactive computational notebooks in perpetuity, especially in conjunction with the major improvements in cloud notebook publishing.
Brand-new course content is coming to Wolfram U, including the release of an interactive image processing course to debut next week. Also coming to Wolfram U are integrated course-authoring tools and a new math and science initiative.
The recent release of Wolfram|Alpha Notebook Edition allows any student to build up and work through computations right inside a Wolfram Notebook, just as they would in Wolfram|Alpha.
The release of 12.1 in the next few months will include dozens of new functions, as well as improved high-DPI support on Windows and more workflows crosslinked to other documentation. Additionally, 12.1 will feature a new Standard Notebook toolbar.
Work toward incremental quality development will be prioritized in order to tackle many lower-level bugs that still exist.
A Wolfram Citings page will be added to Wolfram Community in order to share the unique creations people are making with Wolfram products. Sharing is also easier than ever with new functionality allowing users to embed notebooks in Community posts.
A new initiative to find a fundamental theory of physics using computational technology has been launched. A team has been assembled to work toward this goal, and Stephen will be livestreaming his search for the fundamental theory of physics.
and, of course, much more! So be sure to check out the keynote in its entirety for Stephen’s account of the past year, as well as the path ahead.
Wolfram Innovator Award Winners
Each year at the Wolfram Technology Conference, Stephen recognizes a number of outstanding individuals whose work with the Wolfram Language has been exemplary. Congratulations to the 11 winners of the 2019 Wolfram Innovator Award:
Thomas Burghardt, PhD, Mayo Clinic Rochester: for the application of neural networks constructed with machine intelligence tools in the study of inheritable heart disease.
Todd Feitelson, Millbrook School: for innovative educational techniques utilizing computational thinking and 3D printing in high-school classrooms.
Chris Hanusa, PhD, CUNY Queens College: for creating tools to advance the visualization of concepts in the classroom through computational technology.
Joo-Haeng Lee, PhD, Electronics and Telecommunications Research Institute: for developing a unique and powerful pixel-based color transition algorithm (PixelSwap) and his work in synthetic learning sets.
Casey Mulligan, PhD, University of Chicago, Former Chief Economist for the White House Council of Economic Advisers: for his innovative work on automated economic reasoning, which can begin with purely qualitative assumptions.
Flip Phillips, PhD, Skidmore College: for his work on the visual and haptic perception of two- and three-dimensional shapes, psychological aesthetics and cortical plasticity related to blindness and visual restoration.
Robert Rasmussen, PhD, and William Kirk Reinholtz, NASA Jet Propulsion Laboratory: for optimizing the integration of mission operation systems and preserving consistent and accountable information throughout the operations processes.
Jane Shen-Gunther, MD, Colonel, US Army, Brooke Army Medical Center: for automating data processing for DNA sequencing in gynecological oncology and HPV detection and integrating interactive visualizations into reporting structures.
Yehuda Ben-Shimol, PhD, Ben-Gurion University of the Negev: for introducing thousands of students and fellow faculty to the use of computational thinking in communications systems engineering as well as contributions to the advancement of earthquake prediction.
Mihai Vidrighin, PhD, PsiQuantum: for building comprehensive models of nonlinear and quantum optics to describe spontaneous parametric photon-pair generation and quantum optics circuits.
Crowning Our Livecoding Champion
On Wednesday night, we held our annual Wolfram Livecoding Championship, a favorite special event now in its fourth year. Conference guests and internal enthusiasts alike competed to most accurately and most quickly answer seven Wolfram Language programming questions. This year, all contestants who earned at least a point on the questions were eligible for prizes, in addition to our first-, second- and third-place winners. Questions included:
Which 10 people have the most Wolfram Language symbols named after them (eponyms)? Return the most common country of birth among them as a country entity.
Evolve the rule 30 cellular automaton for 2,000 steps starting from a single 1 centered on a background of 0s.
Given the string “4 - 2 + 3 + 4”, find all the parenthetical groupings of exactly two subexpressions at a time for which the final expression evaluates to a positive number.
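For a flavor of the solutions, the cellular automaton question above has a one-line answer in the Wolfram Language (this is one possible solution, not necessarily what any contestant submitted):

```
(* evolve rule 30 for 2,000 steps from a single 1 on a background of 0s *)
CellularAutomaton[30, {{1}, 0}, 2000]
```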
Dubbed the “Chip and Flip Show,” the competition was hosted by our high-energy emcees, Chip Hurst and Flip Phillips, who kept the live audience engaged and laughing. We had an incredible first-place finish by Gerli Jogeva! She brought home the win with a commanding 14 points.
Rounding out the top three place finishers, Carlo Barbieri and Sander Huisman came in second and third. Congratulations to our winners, and thank you to all who participated in making this another impressive and truly fun year of livecoding!
If you didn’t catch the championship live, you can relive the excitement and creativity in our archived livestream video of the event.
So Much More
Throughout the conference, our senior developers stopped by our Tech Talk booth to chat about what projects they’re excited about and what their teams are working hard on. We had a student poster session, demo booths for hands-on engagement, an expert panel, topical meet-ups and roundtables, another iteration of the infamous One-Liner Competition and so much more.
See You Next Year!
And that’s the end of the 2019 Wolfram Technology Conference. We want to thank all our presenters and attendees, because you are the ones who make our incredible community what it is. If you couldn’t make it out this year, we hope you’ve enjoyed the livestreamed portions of the conference—keep an eye out for all the archived videos coming soon to the Wolfram YouTube channel. We can’t wait to see what awaits us next year!
Mark your calendars now and save the date for our next Wolfram Technology Conference, October 6–9, 2020!
Today marks the start of our annual Wolfram Technology Conference. We’re looking forward to hearing about the latest innovations in computation from our Wolfram technology users! The conference is a great way to network with other users and find out what’s new at Wolfram and in our community. If you aren’t attending this year, you can still connect with the conference through our broadcast events.
Opening Keynote
We’ll kick off the conference Monday evening with Stephen Wolfram’s keynote. Our CEO will talk about the state of Wolfram technology, sharing new directions and future goals for the company. Tune in on Twitch, Facebook Live or YouTube Live Monday, October 28 starting at 6pm CT to see this presentation live.
Tech Talk Interviews
Throughout the week, we’ll also be streaming several Tech Talks featuring interviews with Wolfram employees about their latest projects. These talks give a taste of the topics discussed at the conference, from medical imaging to blockchain, as well as updates and previews of Wolfram Language and Wolfram SystemModeler features. You can catch these talks on YouTube Live or Facebook Live throughout Tuesday and Wednesday, October 29–30. In the meantime, you can take a look at last year’s talks:
Livecoding Championship
On Wednesday evening, we’ll be holding our fourth annual Livecoding Championship, a chance for talented and creative Wolfram Language users to compete for various prizes by showing off their coding prowess. Contestants will test their abilities with challenges in visualization, audio processing and a range of other computational areas. Anyone at the conference is free to compete—and the turnout gets more diverse and impressive every year! Everyone else can check out Twitch, Facebook Live or YouTube Live at 9pm CT on Wednesday, October 30 to watch the competition (along with running commentary). For a preview of the event, check out last year’s competition:
These streaming events are only a small part of what the Wolfram Technology Conference has to offer. If you like what you see, consider joining us in person next time. Maybe you could be the next Livecoding Champion!