Version 12 Launches Today! (And It’s a Big Jump for Wolfram Language and Mathematica)
The Road to Version 12
Today we’re releasing Version 12 of Wolfram Language (and Mathematica) on desktop platforms, and in the Wolfram Cloud. We released Version 11.0 in August 2016, 11.1 in March 2017, 11.2 in September 2017 and 11.3 in March 2018. It’s a big jump from Version 11.3 to Version 12.0. Altogether there are 278 completely new functions, in perhaps 103 areas, together with thousands of different updates across the system:
In an integer release like 12, our goal is to provide fully-filled-out new areas of functionality. But in every release we also want to deliver the latest results of our R&D efforts. In 12.0, perhaps half of our new functions can be thought of as finishing areas that were started in previous .1 releases, while half begin new areas. I’ll discuss both types of functions in this piece, but I’ll be particularly emphasizing the specifics of what’s new in going from 11.3 to 12.0.
I must say that now that 12.0 is finished, I’m amazed at how much is in it, and how much we’ve added since 11.3. In my keynote at our Wolfram Technology Conference last October I summarized what we had up to that point, and even that took nearly 4 hours. Now there’s even more.
What we’ve been able to do is a testament both to the strength of our R&D effort, and to the effectiveness of the Wolfram Language as a development environment. Both these things have of course been building for three decades. But one thing that’s new with 12.0 is that we’ve been letting people watch our behind-the-scenes design process, livestreaming more than 300 hours of my internal design meetings. So in addition to everything else, I suspect this makes Version 12.0 the very first major software release in history that’s been open in this way.
OK, so what’s new in 12.0? There are some big and surprising things, notably in chemistry, geometry, numerical uncertainty and database integration. But overall, there are lots of things in lots of areas, and in fact even the basic summary of them in the Documentation Center is already 19 pages long:
First, Some Math
Although nowadays the vast majority of what the Wolfram Language (and Mathematica) does isn’t what’s usually considered math, we still put immense R&D effort into pushing the frontiers of what can be done in math. And as a first example of what we’ve added in 12.0, here’s the rather colorful ComplexPlot3D:
ComplexPlot3D[Gamma[z], {z, -4 - 4 I, 4 + 4 I}]
It’s always been possible to write Wolfram Language code to make plots in the complex plane. But only now have we solved the math and algorithm problems that are needed to automate the process of robustly plotting even quite pathological functions in the complex plane.
Years ago I remember painstakingly plotting the dilogarithm function, with its real and imaginary parts. Now ReImPlot just does it:
ReImPlot[PolyLog[2, x], {x, -4, 4}]
The visualization of complex functions is (pun aside) a complex story, with details making a big difference in what one notices about a function. And so one of the things we’ve done in 12.0 is to introduce carefully selected standardized ways (such as named color functions) to highlight different features:
ComplexPlot[(z^2 + 1)/(z^2 - 1), {z, -2 - 2 I, 2 + 2 I}, ColorFunction -> "CyclicLogAbsArg"]
The Calculus of Uncertainty
Measurements in the real world often have uncertainty that gets represented as values with ± errors. We’ve had add-on packages for handling numbers with errors for ages. But in Version 12.0 we’re building in computation with uncertainty, and we’re doing it right.
The key is the symbolic object Around[x, δ], which represents a value “around x”, with uncertainty δ:
Around[7.1,.25]
You can do arithmetic with Around, and there’s a whole calculus for how the uncertainties combine:
Sqrt[Around[7.1,.25]]+Around[1,.1]
If you plot Around numbers, they’ll be shown with error bars:
ListPlot[Table[Around[n,RandomReal[Sqrt[n]]],{n,20}]]
There are lots of options; for example, here’s one way to show uncertainty in both x and y:
ListPlot[Table[Around[RandomReal[10], RandomReal[1]], 20, 2], IntervalMarkers -> "Ellipses"]
You can have Around quantities:
1/Around[Quantity[3, "Metres"], Quantity[3.5, "Centimetres"]]
And you can also have symbolic Around objects:
Around[x, Subscript[δ, x]] + Around[y, Subscript[δ, y]]
But what really is an Around object? It’s something where there are certain rules for combining uncertainties, based on uncorrelated normal distributions. But there’s no statement being made that Around[x, δ] represents anything that actually in detail follows a normal distribution, any more than that Around[x, δ] represents a number specifically in the interval defined by Interval[{x - δ, x + δ}]. It’s just that Around objects propagate their errors or uncertainties according to consistent general rules that successfully capture what’s typically done in experimental science.
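In code terms, propagation based on uncorrelated normal distributions amounts to the standard first-order rules: independent uncertainties add in quadrature, and applying a function f scales an uncertainty by |f′(x)|. Here’s a minimal Python sketch of those two rules (an illustration of the standard calculus, not Wolfram’s implementation):

```python
import math

class Uncertain:
    """A value with an uncorrelated Gaussian uncertainty, propagated to first order."""
    def __init__(self, value, delta):
        self.value = value
        self.delta = abs(delta)

    def __add__(self, other):
        # Rule 1: independent uncertainties add in quadrature.
        return Uncertain(self.value + other.value,
                         math.hypot(self.delta, other.delta))

    def sqrt(self):
        # Rule 2: delta_f = |f'(x)| * delta, with f'(x) = 1/(2 sqrt(x)).
        root = math.sqrt(self.value)
        return Uncertain(root, self.delta / (2 * root))

# Mirrors the Sqrt[Around[7.1, .25]] + Around[1, .1] arithmetic above.
result = Uncertain(7.1, 0.25).sqrt() + Uncertain(1, 0.1)
```

The value comes out as Sqrt[7.1] + 1, and the uncertainty as the quadrature combination of the scaled-down .25 and the .1.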
OK, so let’s say you make a bunch of measurements of some value. You can get an estimate of the value together with its uncertainty using MeanAround (and, yes, if the measurements themselves have uncertainties, these will be taken into account in weighting their contributions):
MeanAround[{1.4,1.7,1.8,1.2,1.5,1.9,1.7,1.3,1.7,1.9,1.0,1.7}]
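The standard way to weight measurements by their uncertainties, which is presumably what the weighting described above amounts to, is inverse-variance weighting. A Python sketch (the helper name mean_around is hypothetical, and this is assumed behavior, not the actual implementation):

```python
import math

def mean_around(values, deltas=None):
    """Weighted mean of measurements. With uncertainties given, weights are
    1/delta^2; without them, the plain mean with the standard error of the mean."""
    if deltas is None:
        n = len(values)
        mean = sum(values) / n
        # Standard error of the mean, from the sample standard deviation.
        var = sum((x - mean) ** 2 for x in values) / (n - 1)
        return mean, math.sqrt(var / n)
    weights = [1.0 / d ** 2 for d in deltas]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, math.sqrt(1.0 / total)

m, u = mean_around([1.4, 1.7, 1.8, 1.2, 1.5, 1.9, 1.7, 1.3, 1.7, 1.9, 1.0, 1.7])
```

Note that a more precise measurement (smaller delta) pulls the weighted mean toward itself, which is the behavior one wants when combining experimental results.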
Functions all over the system, notably in machine learning, are starting to have the option ComputeUncertainty -> True, which makes them give Around objects rather than pure numbers.
Around might seem like a simple concept, but it’s full of subtleties, which is the main reason it’s taken until now for it to get into the system. Many of the subtleties revolve around correlations between uncertainties. The basic idea is that the uncertainty of every Around object is assumed to be independent. But sometimes one has values with correlated uncertainties, and so in addition to Around, there’s also VectorAround, which represents a vector of potentially correlated values with a specified covariance matrix.
There’s even more subtlety when one’s dealing with things like algebraic formulas. If one replaces x here with an Around, then, following the rules of Around, each instance is assumed to be uncorrelated:
(Exp[x] + Exp[x/2]) /. x -> Around[0, .3]
But probably one wants to assume here that even though the value of x may be uncertain, it’s going to be the same for each instance, and one can do this using the function AroundReplace (notice the result is different):
AroundReplace[Exp[x] + Exp[x/2], x -> Around[0, .3]]
There’s lots of subtlety in how to display uncertain numbers. Like how many trailing 0s you should put in:
Around[1,.0006]
Or how much precision of the uncertainty you should include (there’s a conventional breakpoint when the leading digits of the uncertainty are above 35):
{Around[1.2345,.000312],Around[1.2345,.00037]}
In rare cases where lots of digits are known (think, for example, some physical constants), one wants to go to a different way to specify uncertainty:
Around[1.23456789,.000000001]
And it goes on and on. But gradually Around is going to start showing up all over the system. By the way, there are lots of other ways to specify Around numbers. This is a number with 10% relative error:
Around[2,Scaled[.1]]
This is the best Around can do in representing an interval:
Around[Interval[{2,3}]]
For a distribution, Around uses the mean and standard deviation:
Around[NormalDistribution[2,1]]
It can also take into account asymmetry by giving asymmetric uncertainties:
Around[LogNormalDistribution[2,1]]
Classic Math, Elementary and Advanced
In making math computational, it’s always a challenge both to be able to get everything “right”, and not to confuse or intimidate elementary users. Version 12.0 introduces several things to help. First, try solving an irreducible quintic equation:
Solve[x^5 + 6 x + 1 == 0, x]
In the past, this would have shown a bunch of explicit Root objects. But now the Root objects are formatted as boxes showing their approximate numerical values. Computations work exactly the same, but the display doesn’t immediately confront people with having to know about algebraic numbers.
When we say Integrate, we mean “find an integral”, in the sense of an antiderivative. But in elementary calculus, people want to see explicit constants of integration (as they always have in Wolfram|Alpha), so we added an option for that (and C[n] also has a nice, new output form):
Integrate[x^3, x, GeneratedParameters -> C]
When we benchmark our symbolic integration capabilities, we do really well. But there’s always more that can be done, particularly in terms of finding the simplest forms of integrals (and at a theoretical level this is an inevitable consequence of the undecidability of symbolic expression equivalence). In Version 12.0 we’ve continued to pick away at the frontier, adding cases like:
\[Integral]Sqrt[Sqrt[x] + Sqrt[2 x + 2 Sqrt[x] + 1] + 1] \[DifferentialD]x
\[Integral]x^2/(ProductLog[a/x] + 1) \[DifferentialD]x
In Version 11.3 we introduced asymptotic analysis, being able to find asymptotic values of integrals and so on. Version 12.0 adds asymptotic sums, asymptotic recurrences and asymptotic solutions to equations:
AsymptoticSum[1/Sqrt[k], {k, 1, n}, {n, \[Infinity], 5}]
AsymptoticSolve[x y^4 - (x + 1) y^2 + x == 1, y, {x, 0, 3}, Reals]
One of the great things about making math computational is that it gives us new ways to explain math itself. And something we’ve been doing is to enhance our documentation so that it explains the math as well as the functions. For example, here’s the beginning of the documentation about Limit, with diagrams and examples of the core mathematical ideas:
More with Polygons
Polygons have been part of the Wolfram Language since Version 1. But in Version 12.0 they’re getting generalized: now there’s a systematic way to specify holes in them. A classic geographic use case is the polygon for South Africa, with its hole for the country of Lesotho.
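The area of a polygon with holes is just the area of the outer boundary minus the areas of the holes, which the shoelace formula makes a few lines of code (a generic geometry sketch, not Wolfram’s internals; the helper names are hypothetical):

```python
def shoelace(pts):
    """Unsigned area of a simple polygon given as (x, y) vertex pairs."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def polygon_area(outer, holes=()):
    # Outer boundary minus the holes.
    return shoelace(outer) - sum(shoelace(h) for h in holes)

# Unit square with a half-size square hole punched out of the middle.
area = polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)],
                    [[(0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75)]])
```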
In Version 12.0, much like Root, Polygon gets a convenient new display form:
RandomPolygon[20]
You can compute with it just as before:
Area[%]
RandomPolygon is new too. You can ask, say, for 5 random convex polygons, each with 10 vertices, in 3D:
Graphics3D[RandomPolygon[3 -> {"Convex", 10}, 5]]
There are lots of new operations on polygons. Like PolygonDecomposition, which can, for example, decompose a polygon into convex parts:
RandomPolygon[8]
PolygonDecomposition[%, "Convex"]
Polygons with holes introduce a need for other kinds of operations too, like OuterPolygon, SimplePolygonQ, and CanonicalizePolygon.
Computing with Polyhedra
Polygons are pretty straightforward to specify: you just give their vertices in order (and if they have holes, you also give the vertices for the holes). Polyhedra are a bit more complicated: in addition to giving the vertices, you have to say how these vertices form faces. But in Version 12.0, Polyhedron lets you do this in considerable generality, including voids (the 3D analog of holes), etc.
But first, recognizing their 2000+ years of history, Version 12.0 introduces functions for the five Platonic solids:
Graphics3D[Dodecahedron[]]
And given the Platonic solids, one can immediately start computing with them:
Volume[Dodecahedron[]]
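Quantities like this follow mechanically from the vertex-plus-face representation of a polyhedron: triangulate each face and sum signed tetrahedron volumes, per the divergence theorem. A generic Python sketch (not the built-in algorithm):

```python
def polyhedron_volume(vertices, faces):
    """Volume from a vertex list and faces given as index lists, with each
    face's vertices ordered counterclockwise when seen from outside."""
    def det3(a, b, c):
        return (a[0] * (b[1] * c[2] - b[2] * c[1])
                - a[1] * (b[0] * c[2] - b[2] * c[0])
                + a[2] * (b[0] * c[1] - b[1] * c[0]))
    vol = 0.0
    for face in faces:
        v0 = vertices[face[0]]
        # Fan-triangulate the face; each triangle contributes a signed
        # tetrahedron volume (times 6) against the origin.
        for i, j in zip(face[1:], face[2:]):
            vol += det3(v0, vertices[i], vertices[j])
    return vol / 6.0

# Unit cube: 8 vertices (index = 4x + 2y + z), 6 outward-oriented faces.
cube_vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
cube_faces = [[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1],
              [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]]
volume = polyhedron_volume(cube_vertices, cube_faces)
```

The same face-orientation bookkeeping is what makes voids (the 3D analog of holes) workable: a void is just an inner boundary oriented the other way, so its signed volume subtracts.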
Here’s the solid angle subtended at vertex 1 (since it’s Platonic, all the vertices give the same angle):
PolyhedronAngle[Dodecahedron[],1]
Here’s an operation done on the polyhedron:
Graphics3D[BeveledPolyhedron[Dodecahedron[],1]]
Volume[DualPolyhedron[BeveledPolyhedron[Dodecahedron[],1]]]
Beyond the Platonic solids, Version 12 also builds in all the uniform polyhedra (polyhedra whose faces are regular polygons and whose vertices are all equivalent), and you can also get symbolic Polyhedron versions of named polyhedra from PolyhedronData:
Graphics3D[AugmentedPolyhedron[PolyhedronData["Spikey","Polyhedron"],2]]
You can make any polyhedron (including a random one, with RandomPolyhedron), then do whatever computations you want on it:
RegionUnion[Dodecahedron[{0,0,0}],Dodecahedron[{1,1,1}]]
SurfaceArea[%]
Euclid-Style Geometry Made Computable
Mathematica and the Wolfram Language are very powerful at doing both explicit computational geometry and geometry represented in terms of algebra. But what about geometry the way it’s done in Euclid’s Elements, in which one makes geometric assertions and then sees what their consequences are?
Well, in Version 12, with the whole tower of technology we’ve built, we’re finally able to deliver a new style of mathematical computation, one that in effect automates what Euclid was doing 2000+ years ago. A key idea is to introduce symbolic geometric scenes that have symbols representing constructs such as points, and then to define geometric objects and relations in terms of them.
For example, here’s a geometric scene representing a triangle a, b, c, and a circle through a, b and c, with center o, with the constraint that o is at the midpoint of the line from a to c:
GeometricScene[{a,b,c,o},{Triangle[{a,b,c}],CircleThrough[{a,b,c},o],o==Midpoint[{a,c}]}]
On its own, this is just a symbolic thing. But we can do operations on it. For example, we can ask for a random instance of it, in which a, b, c and o are made specific:
RandomInstance[GeometricScene[{a,b,c,o},{Triangle[{a,b,c}],CircleThrough[{a,b,c},o],o==Midpoint[{a,c}]}]]
You can generate as many random instances as you want. We try to make the instances as generic as possible, with no coincidences that aren’t forced by the constraints:
RandomInstance[GeometricScene[{a,b,c,o},{Triangle[{a,b,c}],CircleThrough[{a,b,c},o],o==Midpoint[{a,c}]}],3]
OK, but now let’s “play Euclid”, and find geometric conjectures that are consistent with our setup:
FindGeometricConjectures[GeometricScene[{a,b,c,o},{Triangle[{a,b,c}],CircleThrough[{a,b,c},o],o==Midpoint[{a,c}]}]]
For a given geometric scene, there may be many possible conjectures. We try to pick out the interesting ones. In this case we come up with two, and what’s illustrated is the first one: that the line ba is perpendicular to the line cb. As it happens, this result actually appears in Euclid (it’s in Book 3, as part of Proposition 31), though it’s usually called Thales’s theorem.
In 12.0, we now have a whole symbolic language for representing typical things that appear in Euclid-style geometry. Here’s a more complex situation, corresponding to what’s called Napoleon’s theorem:
RandomInstance[
GeometricScene[{"C", "B", "A", "C'", "B'", "A'", "Oc", "Ob",
"Oa"}, {Triangle[{"C", "B", "A"}],
TC == Triangle[{"A", "B", "C'"}], TB == Triangle[{"C", "A", "B'"}],
TA == Triangle[{"B", "C", "A'"}],
GeometricAssertion[{TC, TB, TA}, "Regular"],
"Oc" == TriangleCenter[TC, "Centroid"],
"Ob" == TriangleCenter[TB, "Centroid"],
"Oa" == TriangleCenter[TA, "Centroid"],
Triangle[{"Oc", "Ob", "Oa"}]}]]
In 12.0 there are lots of new and useful geometric functions that work on explicit coordinates:
CircleThrough[{{0,0},{2,0},{0,3}}]
TriangleMeasurement[Triangle[{{0,0},{1,2},{3,4}}],"Inradius"]
For triangles there are 12 types of centers supported, and, yes, there can be symbolic coordinates:
TriangleCenter[Triangle[{{0,0},{1,2},{3,y}}],"NinePointCenter"]
And to support setting up geometric statements we also need “geometric assertions”. In 12.0 there are 29 different kinds, such as "Parallel", "Congruent", "Tangent", "Convex", etc. Here are three circles asserted to be pairwise tangent:
RandomInstance[GeometricScene[{a,b,c},{GeometricAssertion[{Circle[a],Circle[b],Circle[c]},"PairwiseTangent"]}]]
Going Super-Symbolic with Axiomatic Theories
Version 11.3 introduced FindEquationalProof for generating symbolic representations of proofs. But what axioms should be used for these proofs? Version 12.0 introduces AxiomaticTheory, which gives axioms for various common axiomatic theories.
Here’s my personal favorite axiom system:
AxiomaticTheory["WolframAxioms"]
What does this mean? In a sense, it’s a more symbolic symbolic expression than we’re used to. In something like 1 + x we don’t say what the value of x is, but we imagine that it can have a value. In the expression above, a, b and c are pure “formal symbols” that serve an essentially structural role, and can’t ever be thought of as having concrete values.
What about the · (center dot)? In 1 + x we know what + means. But the · is intended to be a purely abstract operator. The point of the axiom is in effect to define a constraint on what · can represent. In this particular case, it turns out that the axiom is an axiom for Boolean algebra, so that · can represent Nand and Nor. But we can derive consequences of the axiom completely formally, for example with FindEquationalProof:
FindEquationalProof[p·q==q·p,AxiomaticTheory["WolframAxioms"]]
There’s quite a bit of subtlety in all of this. In the example above, it’s useful to have · as the operator, not least because it displays nicely. But there’s no built-in meaning to it, and AxiomaticTheory lets you give something else (here f) as the operator:
AxiomaticTheory[{"WolframAxioms", <|"Nand" -> f|>}]
What’s the Nand doing there? It’s a name for the operator (but it shouldn’t be interpreted as anything to do with the value of the operator). In the axioms for group theory, for example, several operators appear:
AxiomaticTheory["GroupAxioms"]
This gives the default representations of the various operators here:
AxiomaticTheory["GroupAxioms","Operators"]
AxiomaticTheory knows about notable theorems for particular axiomatic systems:
AxiomaticTheory["GroupAxioms","NotableTheorems"]
The basic idea of formal symbols was introduced in Version 7, for doing things like representing dummy variables in generated constructs like these:
PDF[NormalDistribution[0,1]]
Sum[2^n n!, n]
Entity["Surface", "Torus"][EntityProperty["Surface", "AlgebraicEquation"]]
You can enter a formal symbol using \[FormalA] or Esc.aEsc, etc. But back in Version 7 there wasn’t a good compact rendering for formal symbols, and the expression above looked like:
Function[{\[FormalA], \[FormalC]},
 Function[{\[FormalX], \[FormalY], \[FormalZ]}, \[FormalA]^4 -
  2 \[FormalA]^2 \[FormalC]^2 + \[FormalC]^4 -
  2 \[FormalA]^2 \[FormalX]^2 -
  2 \[FormalC]^2 \[FormalX]^2 + \[FormalX]^4 -
  2 \[FormalA]^2 \[FormalY]^2 - 2 \[FormalC]^2 \[FormalY]^2 +
  2 \[FormalX]^2 \[FormalY]^2 + \[FormalY]^4 -
  2 \[FormalA]^2 \[FormalZ]^2 + 2 \[FormalC]^2 \[FormalZ]^2 +
  2 \[FormalX]^2 \[FormalZ]^2 +
  2 \[FormalY]^2 \[FormalZ]^2 + \[FormalZ]^4]]
I always thought this looked incredibly complicated. And for Version 12 we wanted to simplify it. We tried many possibilities, but eventually settled on single gray underdots, which I think look much better.
In AxiomaticTheory, both the variables and the operators are purely symbolic. But one thing that’s definite is the arity of each operator, which one can ask AxiomaticTheory for:
AxiomaticTheory["BooleanAxioms"]
AxiomaticTheory["BooleanAxioms","OperatorArities"]
Conveniently, the representation of operators and arities can immediately be fed into Groupings, to get possible expressions involving particular variables:
Groupings[{a, b}, {CircleTimes -> 2, CirclePlus -> 2, OverBar -> 1}]
The n-Body Problem
Axiomatic theories represent a classic historical area for mathematics. Another classic historical area, much more on the applied side, is the n-body problem. Version 12.0 introduces NBodySimulation, which gives simulations of the n-body problem. Here’s a three-body problem (think Earth-Moon-Sun) with certain initial conditions (and an inverse-square force law):
NBodySimulation["InverseSquare",
 {<|"Mass" -> 1, "Position" -> {0, 0}, "Velocity" -> {0, .5}|>,
  <|"Mass" -> 1, "Position" -> {1, 1}, "Velocity" -> {0, -.5}|>,
  <|"Mass" -> 1, "Position" -> {0, 1}, "Velocity" -> {0, 0}|>}, 4]
You can ask about various aspects of the solution; this plots the positions as a function of time:
ParametricPlot[Evaluate[%[All, "Position", t]], {t, 0, 4}]
Underneath, this is just solving differential equations, but, a bit like SystemModel, NBodySimulation provides a convenient way to set up the equations and handle their solutions. And, yes, standard force laws are built in, but you can define your own.
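The underlying idea, stripped of all the automation, is ordinary numerical integration of pairwise inverse-square forces. A toy Python sketch with a velocity-Verlet integrator (initial conditions loosely modeled on the example above; this is not NBodySimulation’s actual solver):

```python
import math

def accelerations(masses, pos):
    """Pairwise inverse-square attractions (G = 1) on 2D point masses."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r = math.hypot(dx, dy)
                acc[i][0] += masses[j] * dx / r**3
                acc[i][1] += masses[j] * dy / r**3
    return acc

def step(masses, pos, vel, dt):
    """One velocity-Verlet step: move, recompute forces, average the kicks."""
    a0 = accelerations(masses, pos)
    pos = [[p[0] + v[0]*dt + 0.5*a[0]*dt*dt, p[1] + v[1]*dt + 0.5*a[1]*dt*dt]
           for p, v, a in zip(pos, vel, a0)]
    a1 = accelerations(masses, pos)
    vel = [[v[0] + 0.5*(b[0] + c[0])*dt, v[1] + 0.5*(b[1] + c[1])*dt]
           for v, b, c in zip(vel, a0, a1)]
    return pos, vel

# Three unit masses, integrated for half a time unit in small steps.
masses = [1.0, 1.0, 1.0]
pos = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
vel = [[0.0, 0.5], [0.0, -0.5], [0.0, 0.0]]
for _ in range(500):
    pos, vel = step(masses, pos, vel, 0.001)
```

A sanity check on such an integrator is that total momentum stays conserved, since the pairwise forces cancel exactly.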
Language Extensions and Conveniences
We’ve been polishing the core of the Wolfram Language for more than 30 years now, and in each successive version we end up introducing some new extensions and conveniences.
We’ve had the function Information ever since Version 1.0, but in 12.0 we’ve greatly extended it. It used to just give information about symbols (although that’s been modernized as well):
Information[Sin]
But now it also gives information about lots of kinds of objects. Here’s information on a classifier:
Information[Classify["NotablePerson"]]
Here’s information about a cloud object:
Information[CloudPut[100!]]
Hover over the labels in the information box and you can find out the names of the corresponding properties:
Information[CloudPut[100!],"FileHashMD5"]
For entities, Information gives a summary of known property values:
Information[Entity["Element", "Tungsten"]]
Over the past few versions, we’ve introduced a lot of new summary display forms. In Version 11.3 we introduced Iconize, which is essentially a way of creating a summary display form for anything. Iconize has proved to be even more useful than we originally anticipated. It’s great for hiding unnecessary complexity, both in notebooks and in pieces of Wolfram Language code. In 12.0 we’ve redesigned how Iconize displays, particularly to make it read nicely inside expressions and code.
You can explicitly iconize something:
{a,b,Iconize[Range[10]]}
Press the + and you’ll see some details:
Press it and you’ll get the original expression back:
If you have lots of data you want to reference in a computation, you can always store it in a file, or in the cloud (or even in a data repository). It’s usually more convenient, though, to just put it in your notebook, so you have everything in the same place. One way to avoid the data taking over your notebook is to put it in closed cells. But Iconize provides a much more flexible and elegant way to do this.
When you’re writing code, it’s often convenient to “iconize in place”. The right-click menu now lets you do that:
Plot[Sin[x], {x, 0, 10}, PlotStyle -> Red, Filling -> Axis, FillingStyle -> LightYellow]
Talking of display, here’s something small but convenient that we added in 12.0:
PercentForm[0.3]
And here are a couple of other number conveniences that we added:
NumeratorDenominator[11/4]
MixedFractionParts[11/4]
Functional programming has always been a central part of the Wolfram Language. But we’re continually looking to extend it, and to introduce new, generally useful primitives. An example in Version 12.0 is SubsetMap:
SubsetMap[Reverse, {a, b, c, xxx, yyy, zzz}, {2, 5}]
SubsetMap[Reverse@*Map[f], {a, b, c, xxx, yyy, zzz}, {2, 5}]
Functions are normally things that can take several inputs, but always give a single piece of output. In areas like quantum computing, however, one’s interested instead in having n inputs and n outputs. SubsetMap effectively implements such n-input, n-output functions, picking up inputs from specified positions in a list, applying some operation to them, then putting back the results at the same positions.
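Operationally the semantics are easy to state: extract the elements at the given positions, apply the function to that sublist, and write the results back into the same slots. A Python sketch (the helper subset_map is hypothetical, not the built-in):

```python
def subset_map(f, lst, positions):
    """Apply f to the sublist at the given 1-based positions and put the
    results back in those same positions (an n-in, n-out map)."""
    idx = [p - 1 for p in positions]          # Wolfram positions are 1-based
    sub = [lst[i] for i in idx]
    out = f(sub)
    result = list(lst)
    for i, v in zip(idx, out):
        result[i] = v
    return result

# Mirrors SubsetMap[Reverse, {a, b, c, xxx, yyy, zzz}, {2, 5}]:
# the elements at positions 2 and 5 trade places.
swapped = subset_map(lambda s: s[::-1],
                     ["a", "b", "c", "xxx", "yyy", "zzz"], [2, 5])
```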
I started formulating what’s now SubsetMap about a year ago. And I quickly realized that actually I could really have used this function in all sorts of places over the years. But what should this particular lump of computational work be called? My initial working name was ArrayReplaceFunction (which I shortened to ARF in my notes). In a sequence of (livestreamed) meetings we went back and forth. There were ideas like ApplyAt (but it’s not really Apply) and MutateAt (but it’s not doing mutation in the lvalue sense), as well as RewriteAt, ReplaceAt, MultipartApply and ConstructInPlace. There were ideas about curried “function decorator” forms, like PartAppliedFunction, PartwiseFunction, AppliedOnto, AppliedAcross and MultipartCurry.
But somehow when we explained the function we kept on coming back to talking about how it was operating on a subset of a list, and how it was really like Map, except that it was operating on multiple elements at a time. So finally we settled on the name SubsetMap. And in yet another reinforcement of the importance of language design, it’s remarkable how, once one has a name for something like this, one immediately finds oneself able to reason about it, and see where it can be used.
More Machine Learning Superfunctions
For many years we’ve worked hard to make the Wolfram Language the highest-level and most automated system for doing state-of-the-art machine learning. Early on, we introduced the “superfunctions” Classify and Predict that do classification and prediction tasks in a completely automated way, automatically picking the best approach for the particular input given. Along the way, we’ve introduced other superfunctions like SequencePredict, ActiveClassification and FeatureExtract.
In Version 12.0 we’ve got several important new machine learning superfunctions. There’s FindAnomalies, which finds anomalous elements in data:
FindAnomalies[{1.2, 2.5, 3.2, 107.6, 4.6, 5, 5.1, 204.2}]
Along with this, there’s DeleteAnomalies, which deletes elements it considers anomalous:
DeleteAnomalies[{1.2, 2.5, 3.2, 107.6, 4.6, 5, 5.1, 204.2}]
There’s also SynthesizeMissingValues, which tries to generate plausible values for missing pieces of data:
SynthesizeMissingValues[{{1.1,1.4},{2.3,3.1},{3,4},{Missing[],5.4},{8.7,7.5}}]
How do these functions work? They’re all based on a new function called LearnDistribution, which tries to learn the underlying distribution of data, given a certain set of examples. If the examples were just numbers, this would essentially be a standard statistics problem, for which we could use something like EstimatedDistribution. But the point about LearnDistribution is that it works with data of any kind, not just numbers. Here it is learning an underlying distribution for a collection of colors:
dist = LearnDistribution[{RGBColor[0.5172966964096541,
0.4435322033449375, 1.],
RGBColor[0.3984626930847484, 0.5592892024442906, 1.],
RGBColor[0.6149389612362844, 0.5648721294502163, 1.],
RGBColor[0.4129156497559272, 0.9146065592632544, 1.],
RGBColor[0.7907065846445507, 0.41054133291260947`, 1.],
RGBColor[0.4878854162550912, 0.9281119680196579, 1.],
RGBColor[0.9884362181280959, 0.49025178842859785`, 1.],
RGBColor[0.633242503827218, 0.9880985331612835, 1.],
RGBColor[0.9215182482568276, 0.8103084921468551, 1.],
RGBColor[0.667469513641223, 0.46420827644204676`, 1.]}]
Once we have this “learned distribution”, we can do all sorts of things with it. For example, this generates 20 random samples from it:
RandomVariate[dist,20]
But now think about FindAnomalies. What it has to do is to find out which data points are anomalous relative to what’s expected. Or, in other words, given the underlying distribution of the data, it finds what data points are outliers, in the sense that they should occur only with very low probability according to the distribution.
And just like for an ordinary numerical distribution, we can compute the PDF for a particular piece of data. Purple is pretty likely given the distribution of colors we’ve learned from our examples:
PDF[dist, RGBColor[
0.6323870562875563, 0.3525878887878987, 1.0002083564175581`]]
But red is really, really unlikely:
PDF[dist, RGBColor[1, 0, 0]]
For ordinary numerical distributions, there are concepts like CDF that tell us cumulative probabilities, say for getting results that are further out than a particular value. For spaces of arbitrary things, there isn’t really a notion of “further out”. But we’ve come up with a function we call RarerProbability, which tells us the total probability of generating an example with a smaller PDF than the thing we give:
RarerProbability[dist, RGBColor[
0.6323870562875563, 0.3525878887878987, 1.0002083564175581`]]
RarerProbability[dist, RGBColor[1, 0, 0]]
Now we’ve got a way to describe anomalies: they’re just data points that have a very small rarer probability. And in fact FindAnomalies has an option AcceptanceThreshold (with default value 0.001) that specifies what should count as “very small”.
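For the special case of one-dimensional numeric data modeled by a normal distribution, the rarer probability has a closed form: it’s the two-sided tail mass beyond |x − μ|. A toy Python sketch of the whole criterion, using a naive normal fit in place of LearnDistribution (helper names are hypothetical, and a single normal fit to contaminated data is much cruder than the real machinery):

```python
import math

def rarer_probability(x, mu, sigma):
    """P(PDF(X) < PDF(x)) for X ~ Normal(mu, sigma): the two-sided tail
    mass beyond |x - mu|."""
    z = abs(x - mu) / sigma
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

def find_anomalies(data, threshold=0.001):
    """Fit a normal distribution to the data, then flag points whose rarer
    probability falls below the threshold (the AcceptanceThreshold idea)."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    return [x for x in data if rarer_probability(x, mu, sigma) < threshold]
```

Because the outliers themselves inflate the fitted sigma, this naive version needs a looser threshold than 0.001 to flag anything in small contaminated samples, which is one reason learning the distribution well matters.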
OK, but let’s see this work on something more complicated than colors. Let’s train an anomaly detector by looking at 1000 examples of handwritten digits:
AnomalyDetection[RandomSample[ResourceData["MNIST"][[All,1]],1000]]
Now FindAnomalies can tell us which examples are anomalous:
FindAnomalies[AnomalyDetection[RandomSample[ResourceData["MNIST"][[All, 1]], 1000]], {(* a list of handwritten-digit images, omitted here *)}]
mQwU/x9GwAHim/376+sxpRgYHOoZwABdjgEJYJOrd6iv378fS1DtR6jC7Sg8
cvV45erxGEl1OWzeI8Ip+LU5kGMk0JX7ybHOgTwj0QEApknS3g==
"], {{0, 28}, {
28, 0}}, {0, 255},
ColorFunction>GrayLevel],
BoxForm`ImageTag[
"Byte", ColorSpace > Automatic, Interleaving > None],
Selectable>False],
DefaultBaseStyle>"ImageGraphics",
ImageSizeRaw>{28, 28},
PlotRange>{{0, 28}, {0, 28}}]\), \!\(\*
GraphicsBox[
TagBox[RasterBox[CompressedData["
1:eJxTTMoPSmNiYGAo5gASQYnljkVFiZXBAkBOaF5xZnpeaopnXklqemqRRRJI
mQwUD16Qe0EEp9yBfw045Vb924hTbtm/YJxyH/964ZY7j1Mq4F8/Trl6PHKb
yJbrwiNngEtK6CduOZF/N7hwy53DaZ3Ifzxy/1bgkcvHYyZuOd5DrjjlqAUA
H0Iyqg==
"], {{0, 28}, {28, 0}}, {0, 255},
ColorFunction>GrayLevel],
BoxForm`ImageTag[
"Byte", ColorSpace > Automatic, Interleaving > None],
Selectable>False],
DefaultBaseStyle>"ImageGraphics",
ImageSizeRaw>{28, 28},
PlotRange>{{0, 28}, {0, 28}}]\), \!\(\*
GraphicsBox[
TagBox[RasterBox[CompressedData["
1:eJxTTMoPSmNiYGAo5gASQYnljkVFiZXBAkBOaF5xZnpeaopnXklqemqRRRJI
mQwU/x+s4FJKpNVW7FLzeBgZGdnPYJPaz83IKcXIGIVF6q8ro+zlN6u5NBu/
YMidY2RfBaS2MjLOQpf67MiYDaL/KDLyP0WT62OUfQBmTGJkbECTC2PMh9qr
xMh4HkXqLovsTyjzDi/jORS5HsZkOFsMTS6csRfGvMfDegVZ6pU41w0Y25C7
HEXbOkZxKOtvC8tGVFduhMn97GZMQPNBCUyunpHxCppcG0Tu6yZWsdP/0OSe
mXAD3X1Jg1Hi/H8MUMgo1mUkyqq+HlPq/3JmYLzyVmCRAYLZVTbBP7FLURMA
AEeuuRo=
"], {{0, 28}, {28, 0}}, {0, 255},
ColorFunction>GrayLevel],
BoxForm`ImageTag[
"Byte", ColorSpace > Automatic, Interleaving > None],
Selectable>False],
DefaultBaseStyle>"ImageGraphics",
ImageSizeRaw>{28, 28},
PlotRange>{{0, 28}, {0, 28}}]\), \!\(\*
GraphicsBox[
TagBox[RasterBox[CompressedData["
1:eJxTTMoPSmNiYGAo5gASQYnljkVFiZXBAkBOaF5xZnpeaopnXklqemqRRRJI
mQwU/x9gcJ9hDy6pX9GMc3FI/bjAyPgWh9wrZUbeH7hsY2QMwmXdHUamaTik
PlgzsuHStpORMQKXnCuj0C8cUqdZGZVxaZvFyIjLJf/dGKU+4TKSjTERl7bt
eIz0YpR9h0PqPDNuI2czyj/GIfVRl9EVl7Y5jIwzccntl5X+jEuOTgAACjPm
MQ==
"], {{0, 28}, {28, 0}}, {0, 255},
ColorFunction>GrayLevel],
BoxForm`ImageTag[
"Byte", ColorSpace > Automatic, Interleaving > None],
Selectable>False],
DefaultBaseStyle>"ImageGraphics",
ImageSizeRaw>{28, 28},
PlotRange>{{0, 28}, {0, 28}}]\)}]
The Latest in Neural Networks
We first introduced our symbolic framework for constructing, exploring and using neural networks back in 2016, as part of Version 11. And in every version since then we've added all sorts of state-of-the-art features. In June 2018 we introduced our Neural Net Repository to make it easy to access the latest neural net models from the Wolfram Language, and already there are nearly 100 curated models of many different types in the repository, with new ones being added all the time.
So if you need the latest BERT transformer neural network (that was added today!), you can get it from NetModel:
NetModel["BERT Trained on BookCorpus and English Wikipedia Data"]
You can open this up and see the network that's involved (and, yes, we've updated the display of net graphs for Version 12.0):
And you can immediately use the network, here to produce some kind of "meaning features" array:
NetModel["BERT Trained on BookCorpus and English Wikipedia Data"][
"What a wonderful network!"] // MatrixPlot
In Version 12.0 we've introduced several new layer types, notably AttentionLayer, which lets one set up the latest transformer architectures, and we've enhanced our neural net functional programming capabilities, with things like NetMapThreadOperator and multiple-sequence NetFoldOperator. In addition to these inside-the-net enhancements, Version 12.0 adds all sorts of new NetEncoder and NetDecoder cases, such as BPE tokenization for text in hundreds of languages, and the ability to include custom functions for getting data into and out of neural nets.
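For readers unfamiliar with BPE: it builds a token vocabulary by repeatedly merging the most frequent adjacent pair of symbols in a corpus. A stdlib Python sketch of one merge step (a toy illustration, not the actual NetEncoder implementation):

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("abababcab")
pair = most_frequent_pair(tokens)   # ('a', 'b') is the most frequent pair
tokens = merge_pair(tokens, pair)   # ['ab', 'ab', 'ab', 'c', 'ab']
```

Real BPE repeats this until a vocabulary-size budget is reached, so frequent substrings become single tokens.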
But some of the most important enhancements in Version 12.0 are more infrastructural. NetTrain now supports multi-GPU training, as well as dealing with mixed-precision arithmetic and flexible early-stopping criteria. We're continuing to use the popular MXNet low-level neural net framework (to which we've been major contributors), so we can take advantage of the latest hardware optimizations. There are new options for seeing what's happening during training, and there's also NetMeasurements, which allows you to make 33 different types of measurements on the performance of a network:
NetMeasurements[NetModel["LeNet Trained on MNIST Data"],
 {[handwritten "1" image] -> 1, [handwritten "9" image] -> 9,
  [handwritten "5" image] -> 5, [handwritten "2" image] -> 2,
  [handwritten "7" image] -> 7}, "Perplexity"]
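For reference, "perplexity" here is just the exponential of the average negative log-probability the net assigns to the correct labels. A stdlib Python sketch of the formula (an illustration of the measurement, not how NetMeasurements is implemented):

```python
import math

def perplexity(probs_of_true_class):
    """exp of the mean negative log-likelihood of the correct labels."""
    nll = -sum(math.log(p) for p in probs_of_true_class) / len(probs_of_true_class)
    return math.exp(nll)

# A perfect classifier has perplexity 1; uncertainty raises it.
perplexity([1.0, 1.0, 1.0])   # → 1.0
perplexity([0.5, 0.5])        # → 2.0
```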
Neural nets aren't the only, or even always the best, way to do machine learning. But one thing that's new in Version 12.0 is that we're now able to use self-normalizing networks automatically in Classify and Predict, so they can easily take advantage of neural nets when it makes sense.
Computing with Images
We introduced ImageIdentify, for identifying what an image is of, back in Version 10.1. In Version 12.0 we've managed to generalize this, to figure out not only what an image is of, but also what's in an image. So, for example, ImageCases will show us cases of known kinds of objects in an image:
ImageCases[CloudGet["https://wolfr.am/CMoUVVTH"]]
For more details, ImageContents gives a dataset about what's in an image:
ImageContents[CloudGet["https://wolfr.am/CMoUVVTH"]]
You can tell ImageCases to look for a particular kind of thing:
ImageCases[CloudGet["https://wolfr.am/CMoUVVTH"], "zebra"]
And you can also just test to see whether an image contains a particular kind of thing:
ImageContainsQ[CloudGet["https://wolfr.am/CMoUVVTH"], "zebra"]
In a sense, ImageCases is like a generalized version of FindFaces, for finding human faces in an image. Something new in Version 12.0 is that FindFaces and FacialFeatures have become more efficient and robust, with FindFaces now based on neural networks rather than classical image processing, and the network for FacialFeatures now being 10 MB rather than 500 MB:
FacialFeatures[CloudGet["https://wolfr.am/CO20sk12"]] // Dataset
Functions like ImageCases represent newstyle image processing, of a type that didn’t seem conceivable only a few years ago. But while such functions let one do all sorts of new things, there’s still lots of value in more classical techniques. We’ve had fairly complete classical image processing in the Wolfram Language for a long time, but we continue to make incremental enhancements.
An example in Version 12.0 is the ImagePyramid framework, for doing multiscale image processing:
ImagePyramid[CloudGet["https://wolfr.am/CTWBK9Em"]][All]
There are several new functions in Version 12.0 concerned with color computation. A key idea is ColorsNear, which represents a neighborhood in perceptual color space, here around the color Pink:
ChromaticityPlot3D[ColorsNear[Pink,.2]]
The notion of color neighborhoods can be used, for example, in the new ImageRecolor function:
ImageRecolor[CloudGet["https://wolfr.am/CT2rFF6e"],
 ColorsNear[RGBColor[
    Rational[1186, 1275],
    Rational[871, 1275],
    Rational[1016, 1275]], .02] -> Orange]
Speech Recognition & More with Audio
As I sit at my computer writing this, I'll say something to my computer, and capture it:
[embedded audio clip]
Here s a spectrogram of the audio I captured:
Spectrogram[%]
So far we could do this in Version 11.3 (though Spectrogram got 10 times faster in 12.0). But now here's something new:
SpeechRecognize[%%]
We're doing speech-to-text! We're using state-of-the-art neural net technology, but I'm amazed at how well it works. It's pretty streamlined, and we're perfectly well able to handle even very long pieces of audio, say stored in files. And on a typical computer, the transcription will run at about actual real-time speed, so that an hour of speech will take about an hour to transcribe.
Right now we consider SpeechRecognize experimental, and we'll be continuing to enhance it. But it's interesting to see another major computational task just become a single function in the Wolfram Language.
In Version 12.0, there are other enhancements too. SpeechSynthesize supports new languages and new voices (as listed by VoiceStyleData[]).
There's now WebAudioSearch, analogous to WebImageSearch, that lets you search for audio on the web:
WebAudioSearch["rooster"]
You can retrieve actual Audio objects:
WebAudioSearch["rooster", "Samples", MaxItems -> 3]
Then you can make spectrograms or other measurements:
Spectrogram /@%
And then, new in Version 12.0, you can use AudioIdentify to try to identify the category of sound (is that a talking rooster?):
AudioIdentify/@%%
We still consider AudioIdentify experimental. It's an interesting start, but it definitely doesn't, for example, work as well as ImageIdentify.
A more successful audio function is PitchRecognize, which tries to recognize the dominant frequency in an audio signal (it uses both classical and neural net methods). It can't yet deal with "chords", but it works pretty much perfectly for single notes.
When one deals with audio, one often wants not just to identify what's in the audio, but to annotate it. Version 12.0 introduces the beginning of a large-scale audio framework. Right now, AudioAnnotate can mark where there's silence, or where there's something loud. In the future, we'll be adding speaker identification and word boundaries, and lots else. And to go along with these, we also have functions like AudioAnnotationLookup, for picking out parts of an audio object that have been annotated in particular ways.
Underneath all this high-level audio functionality there's a whole infrastructure of low-level audio processing. Version 12.0 greatly enhances AudioBlockMap (for applying filters to audio signals), as well as introducing functions like ShortTimeFourier.
A spectrogram can be viewed a bit like a continuous analog of a musical score, in which pitches are plotted as a function of time. In Version 12.0 there's now InverseSpectrogram, which goes from an array of spectrogram data to audio. Ever since Version 2 in 1991, we've had Play to generate sound from a function (like Sin[100 t]). Now with InverseSpectrogram we have a way to go from a frequency-time bitmap to a sound. (And, yes, there are tricky issues about best guesses for phases when one only has magnitude information.)
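As an aside, the sampling step behind Play-style synthesis is easy to sketch in stdlib Python (a toy analogue of Play[Sin[100 t], {t, 0, 1}]; the sample_tone helper is made up for illustration):

```python
import math

def sample_tone(f, duration=1.0, rate=8000):
    """Sample the function f(t) at `rate` samples per second,
    producing an amplitude list suitable for writing as PCM audio."""
    n = int(duration * rate)
    return [f(i / rate) for i in range(n)]

# One second of Sin[100 t] at 8 kHz
samples = sample_tone(lambda t: math.sin(100 * t), duration=1.0, rate=8000)
```

Going the other way, from a magnitude-only spectrogram back to samples, is the hard part InverseSpectrogram handles, since the phases have to be estimated.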
Natural Language Processing
Starting with Wolfram|Alpha, we've had exceptionally strong natural language understanding (NLU) capabilities for a long time. And this means that given a piece of natural language, we're good at understanding it as Wolfram Language that we can then go and compute from:
EntityValue[
 EntityClass[
  "Country", {EntityProperty["Country", "EntityClasses"] ->
    EntityClass["Country", "Europe"],
   EntityProperty["Country", "Population"] -> TakeLargest[5]}],
 EntityProperty["Country", "Flag"]]
But what about natural language processing (NLP), where we're taking potentially long passages of natural language, and not trying to completely understand them, but instead just find or process particular features of them? Functions like TextSentences, TextStructure, TextCases and WordCounts have given us basic capabilities in this area for a while. But in Version 12.0, by making use of the latest machine learning, as well as our longstanding NLU and knowledgebase capabilities, we've now jumped to having very strong NLP capabilities.
The centerpiece is the dramatically enhanced version of TextCases. The basic goal of TextCases is to find cases of different types of content in a piece of text. An example of this is the classic NLP task of entity recognition, with TextCases here finding what country names appear in the Wikipedia article about ocelots:
TextCases[WikipediaData["ocelots"], "Country" -> "Interpretation"]
We could also ask what islands are mentioned, but now we won't ask for a Wolfram Language interpretation:
TextCases[WikipediaData["ocelots"],"Island"]
TextCases isn't perfect, but it does pretty well:
TextCases[WikipediaData["ocelots"],"Date"]
It supports a whole lot of different content types too:
You can ask it to find pronouns, or reduced relative clauses, or quantities, or email addresses, or occurrences of any of 150 kinds of entities (like companies or plants or movies). You can also ask it to pick out pieces of text that are in particular human or computer languages, or that are about particular topics (like travel or health), or that have positive or negative sentiment. And you can use constructs like Containing to ask for combinations of these things (like noun phrases that contain the name of a river):
TextCases[WikipediaData["ocelots"],Containing["NounPhrase","River"]]
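As a rough illustration of what a Containing-style query is doing, here's a stdlib Python toy that keeps only crude capitalized-phrase candidates containing a given word (TextCases itself uses trained models, not a regex like this):

```python
import re

def phrases_containing(text, word):
    """Very crude 'noun phrase' finder: runs of capitalized words,
    filtered to those that contain `word`."""
    candidates = re.findall(r"(?:[A-Z][a-z]+\s?)+", text)
    return [c.strip() for c in candidates if word in c]

text = "The ocelot ranges from the Amazon River basin to the Rio Grande."
phrases_containing(text, "River")   # → ['Amazon River']
```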
TextContents lets you see, for example, details of all the entities that were detected in a particular piece of text:
TextContents[TextSentences[WikipediaData["ocelots"], 1]]
And, yes, one can in principle use these capabilities through FindTextualAnswer to try to answer questions from text, but in a case like this, the results can be pretty wacky:
FindTextualAnswer[WikipediaData["ocelots"],"weight of an ocelot",5]
Of course, you can get a real answer from our actual built-in curated knowledgebase:
Entity["Species", "Species:LeopardusPardalis"][
EntityProperty["Species", "Weight"]]
By the way, in Version 12.0 we've added a variety of little natural language "convenience functions", like Synonyms and Antonyms:
Synonyms["magnificent"]
Computational Chemistry
One of the surprise new areas in Version 12.0 is computational chemistry. We've had data on explicit known chemicals in our knowledgebase for a long time. But in Version 12.0 we can compute with molecules that are specified simply as pure symbolic objects. Here's how we can specify what turns out to be a water molecule:
Molecule[{Atom["H"],Atom["H"],Atom["O"]},{Bond[{1,3}],Bond[{2,3}]}]
And here's how we can make a 3D rendering:
MoleculePlot3D[%]
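A Molecule like the one above is essentially a labeled graph: a list of atoms plus a list of bonds. As a toy illustration outside the Wolfram Language, here's stdlib Python that derives a formula string from the same kind of atom-list data (a sketch only; real formula conventions like Hill ordering are only approximated by the alphabetical sort used here):

```python
from collections import Counter

def formula(atoms):
    """Build a simple formula string from a list of element symbols,
    sorting elements alphabetically and suppressing counts of 1."""
    counts = Counter(atoms)
    return "".join(el + (str(n) if n > 1 else "")
                   for el, n in sorted(counts.items()))

formula(["H", "H", "O"])   # → 'H2O'
```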
We can deal with known "chemicals":
Molecule[Entity["Chemical", "Caffeine"]]
We can use arbitrary
%["SourceCode"]
So now we have a device like this that runs our Butterworth filter, which we can use anywhere:
If we want to check what it's doing, we can always connect it back into the Wolfram Language, using DeviceOpen to open its serial port, and read and write from it.
Linking to the Unity Universe
What's the relation between the Wolfram Language and video games? Over the years, the Wolfram Language has been used behind the scenes in many aspects of game development (simulating strategies, creating geometries, analyzing outcomes, etc.). But for some time now we've been working on a closer link between the Wolfram Language and the Unity game environment, and in Version 12.0 we're releasing a first version of this link.
The basic scheme is to have Unity running alongside the Wolfram Language, then to set up two-way communication, allowing both objects and commands to be exchanged. The under-the-hood plumbing is quite complex, but the result is a nice merger of the strengths of the Wolfram Language and Unity.
This sets up the link, then starts a new project in Unity:
Needs["UnityLink`"]
UnityOpen["NewProject"]
Now create some complex shape:
RevolutionPlot3D[{Sin[t] + Sin[5 t]/10, Cos[t] + Cos[5 t]/10}, {t, 0, Pi},
 RegionFunction -> (Sin[5 (#4 + #5)] > 0 &), Boxed -> False,
 Axes -> None, PlotTheme -> "ThickSurface"]
Then it takes just one command to put this into the Unity game as an object called "thingoid":
CreateUnityGameObject["Plot", CloudGet["https://wolfr.am/COrZtVvA"],
 Properties -> {
   "SharedMaterial" -> UnityLink`CreateUnityMaterial[Orange]}]
Within the Wolfram Language there's a symbolic representation of the object, and UnityLink now provides hundreds of functions for manipulating such objects, always maintaining versions both in Unity and in the Wolfram Language.
It's very powerful that one can take things from the Wolfram Language and immediately put them into Unity, whether they're geometry, images, audio, geo terrain, molecular structures, 3D anatomy, or whatever. It's also very powerful that such things can then be manipulated within the Unity game, either through things like game physics, or by user action. (Eventually, one can expect to have Manipulate-like functionality, in which the controls aren't just sliders and things, but complex pieces of gameplay.)
We've done experiments with putting Wolfram Language–generated content into virtual reality since the early 1990s. But in modern times Unity has become something of a de facto standard for setting up VR/AR environments, and with UnityLink it's now straightforward to routinely put things from the Wolfram Language into any modern XR environment.
One can use the Wolfram Language to prepare material for Unity games, but within a Unity game UnityLink also basically lets one just insert Wolfram Language code that can be executed during a game, either on a local machine or through an API in the Wolfram Cloud. And, among other things, this makes it straightforward to put hooks into a game so the game can send telemetry (say, to the Wolfram Data Drop) for analysis in the Wolfram Language. (It's also possible to script the playing of the game, which is, for example, very useful for game testing.)
Writing games is a complex matter. But UnityLink provides an interesting new approach that should make it easier to prototype all sorts of games, and to learn the ideas of game development. One reason for this is that it effectively lets one script a game at a higher level, by using symbolic constructs in the Wolfram Language. But another reason is that it lets the development process be done incrementally in a notebook, and explained and documented every step of the way. For example, here's what amounts to a computational essay describing the development of a "piano game":
UnityLink isn't a simple thing: it contains more than 600 functions. But with those functions it's possible to access pretty much all the capabilities of Unity, and to set up pretty much any imaginable game.
Simulated Environments for Machine Learning
For something like reinforcement learning, it's essential to have a manipulable external environment in the loop when one's doing machine learning. Well, ServiceExecute lets you call APIs (what's the effect of posting that tweet, or making that trade?), and DeviceExecute lets you actuate actual devices (turn the robot left) and get data from sensors (did the robot fall over?).
But for many purposes what one instead wants is to have a simulated external environment. And in a way, just the pure Wolfram Language already to some extent does that, for example providing access to a rich computational universe full of modifiable programs and equations (cellular automata, differential equations, …). And, yes, the things in that computational universe can be informed by the real world, say with the realistic properties of oceans, or chemicals or mountains.
But what about environments that are more like the ones we modern humans typically learn in, full of built engineering structures and so on? Conveniently enough, SystemModel gives access to lots of realistic engineering systems. And through UnityLink we can expect to have access to rich game-based simulations of the world.
But as a first step, in Version 12.0 we're setting up connections to some simple games, in particular from the OpenAI Gym. The interface is much as it would be for interacting with the real world, with the game accessed like a "device" (after appropriate, sometimes open-source-painful, installation):
env = DeviceOpen["OpenAIGym", "MontezumaRevenge-v0"]
We can read the state of the game:
DeviceRead[env];
And we can show it as an image:
Image[DeviceRead[env]["ObservedState"]]
With a bit more effort, we can take 100 random actions in the game (always checking that we didn't "die"), then show a feature space plot of the observed states of the game:
FeatureSpacePlot[
Table[If[DeviceRead[env]["Ended"], Return[],
Image[DeviceExecute[env, "Step",
DeviceExecute[env, "RandomAction"]]["ObservedState"]]], 100]]
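The loop above (read state, take a random action, stop when the episode ends) is the standard Gym-style interaction pattern. Here's a self-contained Python mock of that same shape; the MockEnv class is a made-up stand-in, not the real OpenAI Gym or Wolfram device API:

```python
import random

class MockEnv:
    """Tiny stand-in for a Gym-style environment: a 1-D random walk
    whose episode ends when the state drifts past +/-5."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = 0

    def random_action(self):
        return self.rng.choice([-1, 1])

    def step(self, action):
        self.state += action
        return {"ObservedState": self.state, "Ended": abs(self.state) >= 5}

env = MockEnv(seed=0)
trajectory = []
for _ in range(100):
    result = env.step(env.random_action())
    trajectory.append(result["ObservedState"])
    if result["Ended"]:      # stop the episode when we "die"
        break
```

A learning agent would replace `random_action` with a policy, and feed the observed states back into training.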
Blockchain (and CryptoKitty) Computation
In Version 11.3 we began our first connection to the blockchain. Version 12.0 adds a lot of new features and capabilities, perhaps most notably the ability to write to public blockchains, as well as read from them. (We also have our own Wolfram Blockchain for Wolfram Cloud users.) We're currently supporting Bitcoin, Ethereum and ARK blockchains, both their mainnets and testnets (and, yes, we have our own nodes connecting directly to these blockchains).
In Version 11.3 we allowed raw reading of transactions from blockchains. In Version 12.0 we've added a layer of analysis, so that, for example, you can ask for a summary of CK tokens (AKA CryptoKitties) on the Ethereum blockchain:
BlockchainTokenData["CK"]
It's quick to look at all token transactions in history, and make a word cloud of how active different tokens have been:
WordCloud[SortBy[BlockchainTokenData[All,{"Name","TransfersCount"}],Last]]
But what about doing our own transaction? Let's say we want to use a Bitcoin ATM (like the one that, bizarrely, exists at a bagel store near me) to transfer cash to a Bitcoin address. Well, first we create our crypto keys (and we need to make sure we remember our private key!):
keys=GenerateAsymmetricKeyPair["Bitcoin"]
Next, we have to take our public key and generate a Bitcoin address from it:
BlockchainKeyEncode[keys["PublicKey"], "Address", BlockchainBase -> "Bitcoin"]
Make a QR code from that and you're ready to go to the ATM:
BarcodeImage[%, "QR"]
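For the curious, the address string involved here is Base58Check data: a version byte plus a payload, followed by a 4-byte double-SHA-256 checksum, written in a 58-character alphabet. A stdlib-only Python sketch of that encoding step (illustrative; BlockchainKeyEncode also performs the SHA-256/RIPEMD-160 hashing of the public key, which is omitted here):

```python
import hashlib

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version, payload):
    """Base58Check: version byte + payload + 4-byte double-SHA256 checksum."""
    data = bytes([version]) + payload
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    n = int.from_bytes(data + checksum, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # Leading zero bytes encode as leading '1' characters
    for b in data + checksum:
        if b == 0:
            out = "1" + out
        else:
            break
    return out

# A mainnet P2PKH address (version byte 0x00) always starts with '1'
addr = base58check(0x00, bytes(20))
```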
But what if we want to write to the blockchain ourselves? Here we'll use the Bitcoin testnet (so we're not spending real money). This shows an output from a transaction we did before, that includes 0.0002 bitcoin (i.e. 20,000 satoshi):
$BlockchainBase={"Bitcoin", "Testnet"};
First[BlockchainTransactionData["17a422eebfbf9cdee19b600740597bafea45cc4c703c67afcc8fb889f4cf7f28","Outputs"]]
Now we can set up a transaction which takes this output, and, for example, sends 8000 satoshi to each of two addresses (that we defined just like for the ATM transaction):
BlockchainTransaction[<|"Inputs" -> {<|"TransactionID" ->
      "17a422eebfbf9cdee19b600740597bafea45cc4c703c67afcc8fb889f4cf7f28", "Index" -> 0|>},
  "Outputs" -> {<|"Amount" -> Quantity[8000, "Satoshi"],
      "Address" -> "munDTMqa9V9Uhi3P21FpkY8UfYzvQqpmoQ"|>,
    <|"Amount" -> Quantity[8000, "Satoshi"],
      "Address" -> "mo9QWLSJ1g1ENrTkhK9SSyw7cYJfJLU8QH"|>},
  "BlockchainBase" -> {"Bitcoin", "Testnet"}|>]
OK, so now we've got a blockchain transaction object that would offer a fee (shown in red because it's actual money you'll spend) of all the leftover cryptocurrency (here 4000 satoshi) to a miner willing to put the transaction in the blockchain. But before we can submit this transaction (and "spend the money") we have to sign it with our private key:
BlockchainTransactionSign[%, keys["PrivateKey"]]
Finally, we just apply BlockchainTransactionSubmit, and we've submitted our transaction to be put on the blockchain:
BlockchainTransactionSubmit[%]
Here's its transaction ID:
txid=%["TransactionID"]
If we immediately ask about this transaction, we'll get a message saying it isn't in the blockchain:
BlockchainTransactionData[txid]
But after we wait a few minutes, there it is, and it'll soon spread to every copy of the Bitcoin testnet blockchain:
BlockchainTransactionData[txid]
If you're prepared to spend real money, you can use exactly the same functions to do a transaction on a main net. You can also do things like buy CryptoKitties. Functions like BlockchainContractValue can be used for any (for now, only Ethereum) smart contract, and are set up to immediately understand things like ERC-20 and ERC-721 tokens.
And Ordinary Crypto as Well
Dealing with blockchains involves lots of cryptography, some of which is new in Version 12.0 (notably, handling elliptic curves). But in Version 12.0 we're also extending our non-blockchain cryptographic functions. For example, we've now got functions for directly dealing with digital signatures. This creates a digital signature using the private key from above:
message="This is my genuine message";
signature=GenerateDigitalSignature[message,keys["PrivateKey"]]
Now anyone can verify the message using the corresponding public key:
VerifyDigitalSignature[{message,signature},keys["PublicKey"]]
In Version 12.0, we added several new types of hashes for the Hash function, particularly to support various cryptocurrencies. We also added ways to generate and verify derived keys. Start from any password, and GenerateDerivedKey will "puff it out" to something longer (to be more secure, you should add "salt"):
GenerateDerivedKey["meow"]
Here's a version of the derived key, suitable for use in various authentication schemes:
GenerateDerivedKey["meow"]["PHCString"]
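Derived keys of this kind come from deliberately slow, salted password hashing. As a rough stdlib Python analogue using PBKDF2 (illustrative only; the actual scheme and parameters GenerateDerivedKey uses are set by its own options, not shown here):

```python
import hashlib
import os

def derive_key(password, salt=None, iterations=100_000, length=32):
    """Stretch a password into a fixed-length key with PBKDF2-HMAC-SHA256.
    A random salt is generated when none is supplied."""
    salt = os.urandom(16) if salt is None else salt
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, length)
    return salt, key

salt, key = derive_key("meow")
# Same password + same salt gives the same key; a new salt gives a new key.
```

The high iteration count is the point: it makes each brute-force guess expensive for an attacker while costing a legitimate login almost nothing.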
Connecting to Financial Data Feeds
The Wolfram Knowledgebase contains all sorts of financial data. Typically there's a financial entity (like a stock), then there's a property (like price). Here's the complete daily history of Apple's stock price (it's very impressive that it looks best on a log scale):
DateListLogPlot[Entity["Financial", "NASDAQ:AAPL"][Dated["Price",All]]]
But while the financial data in the Wolfram Knowledgebase, and standardly available in the Wolfram Language, is continuously updated, it's not real time (mostly it's 15-minute delayed), and it doesn't have all the detail that many financial traders use. For serious finance use, however, we've developed Wolfram Finance Platform. And now, in Version 12.0, it's got direct access to Bloomberg and Reuters financial data feeds.
The way we architect the Wolfram Language, the framework for the connections to Bloomberg and Reuters is always available in the language, but it's only activated if you have Wolfram Finance Platform, as well as the appropriate Bloomberg or Reuters subscriptions. But assuming you have these, here's what it looks like to connect to the Bloomberg Terminal service:
ServiceConnect["BloombergTerminal"]
All the financial instruments handled by the Bloomberg Terminal now become available as entities in the Wolfram Language:
Entity["BloombergTerminal","AAPL US Equity"]
Now we can ask for properties of this entity:
Entity["BloombergTerminal","AAPL US Equity"]["PX_LAST"]
Altogether there are more than 60,000 properties accessible from the Bloomberg Terminal:
Length[EntityProperties["BloombergTerminal"]]
Here are 5 random examples (yes, they're pretty detailed; those are Bloomberg names, not ours):
RandomSample[EntityProperties["BloombergTerminal"],5]
We support the Bloomberg Terminal service, the Bloomberg Data License service, and the Reuters Elektron service. One sophisticated thing one can now do is to set up a continuous task to asynchronously receive data, and call a handler function every time a new piece of data comes in:
ServiceSubmit[
 ContinuousTask[ServiceRequest[ServiceConnect["Reuters"],
   "MarketData", {"Instrument" -> "AAPL.O", "TriggerFields" -> {"BID", "ASK"}}]],
 HandlerFunctions -> <|"EventTriggered" -> (action[#Result] &)|>]
Software Engineering & Platform Updates
I've talked about lots of new functions and new functionality in the Wolfram Language. But what about the underlying infrastructure? Well, we've been working hard on that too. For example, between Version 11.3 and Version 12.0 we've managed to fix nearly 8000 reported bugs. We've also made lots of things faster and more robust. And in general we've been tightening the software engineering of the system, for example reducing the initial download size by nearly 10% (despite all the functionality that's been added). (We've also done things like improve the predictive prefetching of knowledgebase elements from the cloud, so that when you need similar data it's more likely to already be cached on your computer.)
It's a longstanding feature of the computing landscape that operating systems are continually getting updated, and to take advantage of their latest features, applications have to get updated too. We've been working for several years on a major update to our Mac notebook interface, which is finally ready in Version 12.0. As part of the update, we've rewritten and restructured large amounts of code that have been developed and polished over more than 20 years. The result is that in Version 12.0, everything about our system on the Mac is fully 64-bit, and makes use of the latest Cocoa APIs. This means that the notebook front end is significantly faster, and can also go beyond the previous 2 GB memory limit.
There's also a platform update on Linux, where the notebook interface now fully supports Qt 5, which allows all rendering operations to take place "headlessly", without any X server, greatly streamlining deployment of the Wolfram Engine in the cloud. (Version 12.0 doesn't yet have high-dpi support for Windows, but that's coming very soon.)
The development of the Wolfram Cloud is in some ways separate from the development of the Wolfram Language and the Wolfram Desktop applications (though for internal compatibility we're releasing Version 12.0 in both environments at the same time). But in the year since Version 11.3 was released, there's been dramatic progress in the Wolfram Cloud.
Especially notable are the advances in cloud notebooks, supporting more interface elements (including some, like embedded websites and videos, that aren't yet available in desktop notebooks), as well as greatly increased robustness and speed. (Making our whole notebook interface work in a web browser is no small feat of software engineering, and in Version 12.0 there are some pretty sophisticated strategies for things like maintaining consistent fast-to-load caches, along with full symbolic DOM representations.)
In Version 12.0 there's now just a simple menu item (File > Publish to Cloud) to publish any notebook to the cloud. And once the notebook is published, anyone in the world can interact with it, as well as make their own copy so they can edit it.
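The same publishing step can also be scripted. Here's a minimal sketch using CloudPublish (the notebook contents and the "hello-notebook" name are purely illustrative):

```wl
(* Publish a tiny notebook to the cloud as a public cloud object;
   anyone with the resulting URL can view it and make their own copy. *)
obj = CloudPublish[
  Notebook[{Cell["Hello from the Wolfram Cloud", "Text"]}],
  "hello-notebook"]

(* The cloud object's shareable URL is its first argument. *)
First[obj]
```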
It's interesting to see how broadly the cloud has entered what can be done in the Wolfram Language. In addition to all the seamless integration of the cloud knowledgebase, and the ability to reach out to things like blockchains, there are also conveniences like Send To, for sending any notebook through email, using the cloud if there's no direct connection to an email server available.
And a Lot Else
Even though this has been a long piece, it's not even close to telling the whole story of what's new in Version 12.0. Along with the rest of our team, I've been working very hard on Version 12.0 for a long time now, but it's still exciting to see just how much is actually in it.
But what's critical (and a lot of work to achieve!) is that everything we've added is carefully designed to fit coherently with what's already there. From the very first version of what's now the Wolfram Language, more than 30 years ago, we've been following the same core principles, and this is part of what's allowed us to so dramatically grow the system while maintaining long-term compatibility.
It's always difficult to decide exactly what to prioritize developing for each new version, but I'm very pleased with the choices we made for Version 12.0. I've given many talks over the past year, and I've been struck by how often I've been able to say about things that come up: "Well, it so happens that that's going to be part of Version 12.0!"
I've personally been using internal preliminary builds of Version 12.0 for nearly a year, and I've come to take many of its new capabilities for granted, and to use and enjoy them a lot. So it's a great pleasure that today we have the final Version 12.0, with all these new capabilities officially in it, ready to be used by anyone and everyone!
To comment, please visit the copy of this post at the Stephen Wolfram Blog »
2. Fishackathon: Protecting Marine Life with AI and the Wolfram Language (Thu., Apr. 11)
Fishackathon
Every year, the U.S. Department of State sponsors a worldwide competition called Fishackathon. Its goal is to protect life in our waters by creating technological solutions to help solve problems related to fishing.
The first global competition was held in 2014, and the event has been growing massively every year. In 2018 the winning entry came from a five-person team from Boston, competing against 45,000 people in 65 other cities spread across 5 continents. The participants comprised programmers, web and graphic designers, oceanographers and biologists, mathematicians, engineers and students, who all worked tirelessly over the course of two days.
To find out more about the winning entry for Fishackathon in 2018 and how the Wolfram Language has helped make the seas safer, we sat down with Michael Sollami to learn more about him and his team’s solution to that year’s challenge.
Tell us a little bit about yourself and how you became familiar with Wolfram tech
I became involved with Wolfram technologies during college and graduate school. I used Mathematica extensively for research in mathematics and computer science, and I ended up working with Stephen Wolfram in his Cambridge-based advanced research group. Today, I'm Lead Data Scientist at Salesforce Einstein, where I work with an amazing team of engineers and researchers to build new machine learning systems to enhance our prediction and search platforms. From experimental deep neural architectures to prototyping ever-more-efficient information retrieval methods, we are designing next-generation search, recommendation and predictive analytics technologies.
What drew you to Fishackathon?
Growing up near the Long Island Sound, where my family had a small sailboat, we spent a lot of time on or near the water. I love scuba diving (random trivia: Saba is my favorite dive site), and over the years I have seen firsthand the unnerving levels of aquatic devastation. The data coming from oceanographic research paints quite a bleak future. In fact, according to the best oceanographic computer models, there won’t be any fish larger than minnows by the 2040s.
In just 50 years, we’ve reduced the populations of large fish, such as bluefin tuna and cod, by over 90 percent. Industrial fishing uses nets that are 20 miles long, and trawlers drag something the size of a tractor trailer along the ocean floor. In just a few short decades, we clear-cut our seabed floors, once vital nurseries of sponges and corals, into millions of square miles of lifeless mud. According to coral reef ecologist Jeremy Jackson, “The total area of underwater habitat destruction is larger than the sum of all forests that have ever been cut down in the history of humanity.”
Many factors contribute to the threat of sea-life extinction, and they are not independent: biological and chemical pollution, acidification, deoxygenation, plastics, the climate crisis—it can be overwhelming. So we thought illegal, unreported and unregulated (IUU) overfishing was a good place to brainstorm possible solutions. It was incredibly exciting to compete simultaneously with over 3,500 other people around the world, and in the space of a weekend produce code with the potential for positive environmental impact.
What was last year's challenge?
Each year Fishackathon offers multiple challenge statements that competing teams can choose from. The topics are developed by industry professionals with real-world needs, and are made available to participants nearer to the time of the event. In 2018, we chose challenge #10: passive illegal fishing detection.
Challenge Statement: Protecting restricted fishing zones (e.g. marine reserves, remote areas) from illegal fishing is a huge challenge. A passive tool (maybe using sonar?) that helps identify fishing activity in restricted areas would help agencies monitor, track and enforce laws more effectively.
You might recall from watching The Hunt for Red October that the United States Navy operates a chain of underwater listening posts located around the world, a sound surveillance system called SOSUS. This collection of bottom-mounted hydrophone arrays was originally constructed in the 1950s to track and fingerprint Soviet submarines by their acoustic signals. Almost 70 years later, there is no reason why we can't apply (newer and cheaper versions of) this technology to track and identify fishing vessels.
What was the process of developing your solution?
We knew that the heart of our solution was an accurate fishing acoustics model, so we started with the software side of the problem. Once we had proof of life in the core detection algorithm, we iterated on designing the web app/interface and hardware components.
We needed to design the submersible listening device to be outfitted with a hydrophone while keeping it as inexpensive as possible. The deployed devices would also need to be able to transmit network data back to our servers, for triangulating and tracking the positions of any detected poaching vessels operating in protected waters.
Our eventual submission, which we called PoachStopper, would not only recognize sounds associated with fishing, but also compute a unique signature for each boat that passes within a detection radius of 50 kilometers.
How did you incorporate deep learning into the solution?
This is where Mathematica shined. Mathematica's ability to import and manipulate audio files made the data preprocessing pipeline dead simple. NetEncoders for "Audio" that handle varying dimensionalities made preprocessing inputs much easier than in many other frameworks at the time.
net=ResourceData["HiddenAudioIdentifyMobileNetDepth10"];
ne=NetExtract[net,"Input"];
encoder = NetEncoder[{"Function",
    Function[feat,
      First@Partition[If[Length[feat] < 96, PadRight[feat, 96, "Fixed"], feat],
       96, 64]]@Normal[NetEncoder[{"AudioMelSpectrogram",
        "NumberOfFilters" -> 64, "WindowSize" -> 600}][#1]] &,
    {96, 64}, "Pattern" -> None, "Batched" -> True}];
Image3D[ne@Normal@ExampleData[#], ImageSize -> Tiny,
   ColorFunction -> "RainbowOpacity"] & /@ ExampleData["Audio"][[;; 20]] // Multicolumn
At the time Fishackathon was held, the Neural Net Repository didn't yet exist, so I couldn't just pull some state-of-the-art model and perform transfer learning. We needed to train the network from scratch, with the additional requirement of maintaining a very low energy-consumption profile. For these reasons, I decided to start working with a variant of Google's MobileNet architecture, which features depthwise separable convolutions as efficient building blocks. To the basic skeleton I added some linear bottlenecks between the layers with skip connections, which led to higher accuracies and accelerated convergence. After processing the raw audio of our ocean sounds dataset into normalized spectrograms, I trained on AWS overnight on a GPU instance, and by Sunday morning had a working detector.
This was before Version 11.3 came out last year, but it would have been nice to use tools like NetEncoder["AudioMFCC"] and WebAudioSearch. The network we designed could distinguish with 98% accuracy between the normal sounds of the ocean, non-fishing boats and actively fishing trawlers operating in their different modes. We also built a separate recognition module that learned a nicely invariant mapping between the sounds of individual vessels' engines and propellers—essentially a perceptual hash—for tracking specific vessels.
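For comparison, here is a minimal sketch of how the "AudioMFCC" encoder mentioned above collapses that preprocessing into a one-liner (the option value and the synthetic test signal are illustrative, not the ones used in PoachStopper):

```wl
(* Encode one second of a synthetic tone as a fixed-length sequence of
   MFCC feature vectors; real hydrophone recordings are handled identically. *)
mfcc = NetEncoder[{"AudioMFCC", "TargetLength" -> 96}];
features = mfcc[AudioGenerator["Sin", 1]];
Dimensions[features]
```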
Using the Raspberry Pi seems like a good choice for inexpensive deployment
Indeed! Thanks to Mathematica being packaged with the Raspbian OS, that could be a good route for people to take. Raspberry Pis can come with GPUs—however, they aren't from NVIDIA, and so are not supported for TargetDevice -> "GPU". For production, we ended up porting the network into PyTorch, but I eventually ported the model to TensorFlow Mobile for testing with IoT and mobile devices, which was better from a real-time and power-usage perspective.
What's next for PoachStopper?
After we won the finals, I handed the project off to my very capable teammates, who are pursuing various partnerships and funding opportunities. PoachStopper and other ecostartups like it have the potential to make very measurable impacts in the fight for the future of oceanic life. However, the hard truth is that governments are the only entities that can prevent the end of fish. And it is easy to do, with just a few simple legislative steps:
Create sufficiently large unfishable areas for populations to begin regenerating
Impose quotas on the amount of fish caught in any given year
End government subsidies for the fishingindustrial complex
Companies need to pay for the privilege of fishing, and governments need to ensure that our oceans will not become a vast toxic desert, as we predict they will by 2050.
What's in store for this year's Fishackathon?
This year’s Fishackathon has a large focus on Fishcoin, and will concentrate on developing solutions for data capture and sharing using open platforms. This essentially means building apps that can capture and process sensor data in the field and then publish it with decentralized ledger technology. With all its new blockchain-related features, Mathematica could be the weapon of choice in 2019!
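As an illustration of those blockchain features, here is a minimal sketch that reads public ledger data (Bitcoin chosen arbitrarily; the exact properties returned are best checked in the documentation):

```wl
(* Fetch metadata for the most recent block on the Bitcoin blockchain;
   the result is an association of block properties. *)
block = BlockchainBlockData[-1, BlockchainBase -> "Bitcoin"];
Keys[block]
```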
Anything you'd like to add?
I consider Mathematica to be a “killer app” for any hackathon. Python and machine learning frameworks are my primary tools for machine learning engineering, but Mathematica remains the fastest language for prototyping small-scale things. With Version 12 on the horizon, I’m very excited about the ability to compile everything down to machine code for running at C++ speed—not to mention the ability to export networks to ONNX for use in production-grade servers.
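The Version 12 compilation mentioned above is exposed through FunctionCompile; a minimal sketch:

```wl
(* Compile a typed pure function down to optimized native code;
   the compiled function is then called like any other. *)
cf = FunctionCompile[Function[Typed[x, "MachineInteger"], x^2 + 1]];
cf[10]  (* returns 101, but now via compiled machine code *)
```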
The latest Wolfram technology stack makes it possible for you to develop and deploy useful applications in minutes. Start coding today with a Wolfram|One trial.
Try now!
3. Wolfram and the Raspberry Pi Foundation Collaborate on Free Access to Educational Project Materials (Tue., Apr. 9) Wolfram Research is pleased to announce further collaboration with the Raspberry Pi Foundation as part of supporting makers across the world through education. A collection of 10 Wolfram Language projects has been launched on the foundation’s projects site. These projects range from creating weather dashboards to building machine learning classifiers to using AI for facial recognition. The goal is to put the power of computational intelligence into the hands of anyone who wants access—democratizing the skills that will increasingly be needed to innovate and discover what is possible with modern computation.
By providing easy-to-follow, step-by-step tutorials that result in a finished, functioning piece of software, Wolfram aims to lower the barrier to entry for those who wish to get started immediately with programming, building and making. Projects can be built entirely on the Raspberry Pi or within a web browser in the Wolfram Cloud.
Building the Computational Future
Since 2013, the Wolfram Language and Mathematica have been freely available on the Raspberry Pi system as part of NOOBS. Stephen Wolfram wrote in his announcement of the collaboration with the Raspberry Pi Foundation, “I’m a great believer in the importance of programming as a central component of education.” And over five years later, there is indeed increasing demand in the labor force for technical programming skills—part of why Wolfram continues to push computational thinking as a primary means, method and framework for preparing individuals for success in the future of work.
The Wolfram Language is particularly well suited for this mission, as its highlevel symbolic nature and linguistic capabilities not only tell machines precisely what to do, but can also be easily read by nontechnical people—the world’s first and only true computational communication language understandable by both humans and AI.
“These projects provide a fantastic opportunity for code clubs around the world to step into the power of using the Wolfram Language to springboard their computational thinking skills’ development,” says Jon McLoone, co-founder of Computer-Based Math™ (CBM).
The first of these project materials will be available this week, with more planned throughout the year. It will be interesting and exciting to see what people build with the Wolfram Language and Raspberry Pi.
4. Drawing on Autopilot: Automated Plane (Geometry) Illustrations from The American Mathematical Monthly (Thu., Apr. 4)
Version 12 of the Wolfram Language introduces the functions GeometricScene, RandomInstance and FindGeometricConjectures for representing, drawing and reasoning about problems in plane geometry. In particular, abstract scene descriptions can be automatically supplied with coordinate values to produce diagrams satisfying the conditions of the scene. Let’s apply this functionality to some of the articles and problems about geometry appearing in the issues of The American Mathematical Monthly from February and March of 2019.
Solving Newton's Equation Geometrically
First consider the article “Newton Quadrilaterals, the Associated Cubic Equations, and Their Rational Solutions,” by Mowaffaq Hajja and Jonathan Sondow, appearing in the February 2019 issue.
Newton posed the following problem in his 1720 algebra textbook Universal Arithmetick: given a quadrilateral with side lengths a, b and c inscribed in a circle of diameter d, solve for d given a, b and c. His solution was Newton’s equation:
newtonEq = (d^3 - (a^2 + b^2 + c^2) d - 2 a b c == 0) && (d > 0);
Let’s solve Newton’s equation for the following random values of a, b and c:
abcRules = Thread[{a, b, c} -> RandomReal[10, 3]]
We could use Solve to find d directly:
Solve[newtonEq/.abcRules,d]
We could also employ RandomInstance and GeometricScene to solve for d using the original geometric construction. First we draw the scene, using only the values of a, b and c (the symbol d appears, but is not assigned a value initially; the first argument of GeometricScene contains the list of symbolic points and, optionally, the list of symbolic quantities, each of which can be given a fixed value via a rule assignment, if desired):
newtonScene=RandomInstance[GeometricScene[
{{"A","B","C","D"},Append[abcRules,d]},
{
Polygon[{"A","B","C","D"}],
CircleThrough[{"A","B","C","D"},Midpoint[{"A","D"}]],
EuclideanDistance["A","B"]==a,
EuclideanDistance["B","C"]==b,
EuclideanDistance["C","D"]==c,
EuclideanDistance["D","A"]==d
}
]]
Now we extract the value of d in this scene, and see that it equals the solution found directly:
Replace[d,newtonScene["Quantities"]]
The present-day authors prove the converse of Newton’s original statement: given positive numbers a, b, c and d satisfying Newton’s equation, there exists a quadrilateral with side lengths a, b and c inscribed in a circle of diameter d.
We find an instance of such values for a, b, c and d:
abcdRules = First@FindInstance[({a, b, c} == RandomReal[10, 3]) && newtonEq, {a, b, c, d}, Reals]
Indeed, we can draw the scene:
newtonScene=RandomInstance[GeometricScene[
{{"A","B","C","D"},abcdRules},
{
Polygon[{"A","B","C","D"}],
CircleThrough[{"A","B","C","D"},Midpoint[{"A","D"}]],
EuclideanDistance["A","B"]==a,
EuclideanDistance["B","C"]==b,
EuclideanDistance["C","D"]==c,
EuclideanDistance["D","A"]==d
}
]]
Illustrating a Geometric Problem and Conjecturing Its Conclusion
Next we consider Problem 12092 from the February 2019 Problems and Solutions section, proposed by Michael Diao and Andrew Wu.
Let ABC be a triangle, and let P be a point in the plane of the triangle satisfying ∠BAP = ∠CAP. Let Q and R be the points diametrically opposite P on the circumcircles of triangles ABP and ACP, respectively. Let X be the point of concurrency of lines BR and CQ. Prove that XP and BC are perpendicular.
Illustrate the hypotheses:
pic=RandomInstance[GeometricScene[
{a,b,c,p,q,r,x},
{
Element[p, Triangle[{a, b, c}]],
PlanarAngle[{b,a,p}]==PlanarAngle[{c,a,p}],
TriangleCenter[Triangle[{a,b,p}],"Circumcenter"]==Midpoint[{p,q}],
TriangleCenter[Triangle[{a,c,p}],"Circumcenter"]==Midpoint[{p,r}],
GeometricAssertion[{Line[{b,r}],Line[{c,q}]},{"Concurrent",x}],
Style[{Line[{x,p}],Line[{b,c}]},Red]
}
]]
Use FindGeometricConjectures to find facts about this particular scene instance, including the conclusion to our problem:
FindGeometricConjectures[pic,GeometricAssertion[_,"Perpendicular"]]
Finding Evidence in Support of Geometric Inequalities
Finally, we consider Problem 12098 from the March 2019 Problems and Solutions section, proposed by Leonard Giugiuc and Kadir Altintas.
Suppose that the centroid of a triangle with semiperimeter s and inradius r lies on its incircle. Prove s ≥ 3√6 r, and determine conditions for equality.
Generate three separate instances of the scene:
pics=RandomInstance[GeometricScene[
{{"A","B","C","D"},{s,r}},
{
tri==Triangle[{"A","B","C"}],
TriangleMeasurement[tri,"Semiperimeter"]==s,
TriangleMeasurement[tri,"Inradius"]==r,
"D"==TriangleCenter[tri,"Centroid"],
"D"?TriangleConstruct[tri,"Incircle"]
}
],3]
Verify that the inequality holds in each instance:
Grid[ReplaceAll[{s, r, s >= 3 Sqrt[6] r}, Prepend[Through[pics["Quantities"]], {}]], Frame -> All]
Verify that the inequality holds in general for triangles having side lengths a, b and c, using the formulas for the semiperimeter s, the inradius r and the distance d from the incenter to the centroid:
Module[{s, r, d},
 s = (a + b + c)/2;
 r = Sqrt[(s - a) (s - b) (s - c)/s];
 d = Sqrt[(a^3 + b^3 + c^3 - 2 (a^2 (b + c) + b^2 (a + c) + c^2 (a + b)) + 9 a b c)/(9 (a + b + c))];
 Minimize[{s/r, And[r == d, 0 < c <= b <= a]}, {a, b, c}]
]
Visualize these triangles:
Graphics[Join[{Opacity[.4], RandomColor[], Triangle[{{-1, 0}, {1, 0}, #}]} & /@ xyVals, {Line[{{-1, 0}, {1, 0}}]}, Point /@ xyVals]]
Verify that the triangles satisfying the equality are all isosceles triangles, proving our claim in general:
(#/First[#] &) /@ (Sort[ArcLength /@ TriangleConstruct[{{-1, 0}, {1, 0}, #}, {"OppositeSide", All}]] & /@ xyVals)
Hence we have demonstrated that the inequality holds in general, with equality for a single class of similar triangles. Isosceleasypeasy!
Download this post as a Wolfram Notebook.
Mathematica 12 significantly extends the reach of Mathematica and introduces many innovations that give all Mathematica users new levels of power and effectiveness.
Buy now!
5. Why Wolfram Tech Isn’t Open Source—A Dozen Reasons (Tue., Apr. 2)
Over the years, I have been asked many times about my opinions on free and open-source software. Sometimes the questions are driven by comparison to some promising or newly fashionable open-source project, sometimes by comparison to a stagnating open-source project, and sometimes by the belief that Wolfram technology would be better if it were open source.
At the risk of provoking the fundamentalist end of the open-source community, I thought I would share some of my views in this blog. While there are counterexamples to most of what I have to say, not every point applies to every project, and I am somewhat glossing over the different kinds of “free” and “open,” I hope I have crystallized some key points.
Much of this blog could be summed up with two answers: (1) free, open-source software can be very good, but it isn't good at doing what we are trying to do; with a large fraction of the reason being (2) open source distributes design over small, self-assembling groups who individually tackle parts of an overall task, but large-scale, unified design needs centralized control and sustained effort.
I came up with 12 reasons why I think it would not have been possible to create the Wolfram technology stack using a free and open-source model. I would be interested to hear your views in the comments section below the blog.
1. A coherent vision requires centralized design
FOSS (free and open-source software) development can work well when design problems can be distributed to independent teams who self-organize around separate aspects of a bigger challenge. If computation were just about building a big collection of algorithms, then this might be a successful approach.
But Wolfram’s vision for computation is much more profound—to unify and automate computation across computational fields, application areas, user types, interfaces and deployments. To achieve this requires centralized design of all aspects of technology—how computations fit together, as well as how they work. It requires knowing how computations can leverage other computations and, perhaps most importantly, having a long-term vision for future capabilities that they will make possible in subsequent releases.
You can get a glimpse of how much is involved by sampling the 300+ hours of livestreamed Wolfram design review meetings.
Practical benefits of this include:
 The very concept of unified computation has been largely led by Wolfram.
 High backward and forward compatibility as computation extends to new domains.
 Consistency across different kinds of computation (one syntax, consistent documentation, common data types that work across many functions, etc.).
2. High-level languages need more design than low-level languages
The core team for open-source language design is usually very small, and therefore tends to focus on a minimal set of low-level language constructs to support the language’s key concepts. Higher-level concepts are then delegated to the competing developers of libraries, who design independently of each other and of the core language team.
Wolfram’s vision of a computational language is the opposite of this approach. We believe in a language that focuses on delivering the full set of standardized high-level constructs that allows you to express ideas to the computer more quickly, with less code, in a literate, human-readable way. Only centralized design and centralized control can achieve this in a coherent and consistent way.
Practical benefits of this include:
 One language to learn for all coding domains (computation, data science, interface building, system integration, reporting, process control, etc.)—enabling integrated workflows as these domains converge.
 Code that is on average seven times shorter than Python, six times shorter than Java, three times shorter than R.
 Code that is readable by both humans and machines.
 Minimal dependencies (no collections of competing libraries from different sources with independent and shifting compatibility).
3. You need multidisciplinary teams to unify disparate fields
Self-assembling development teams tend to rally around a single topic, and so tend to come from the same community. As a result, one sees many open-source tools tackle only a single computational domain. You see statistics packages, machine learning libraries, image processing libraries—and the only open-source attempts to unify domains are limited to pulling together collections of these single-domain libraries and adding a veneer of connectivity. Unifying different fields takes more than this.
Because Wolfram is large and diverse enough to bring together people from many different fields, it can take on the centralized design challenge of finding the common tasks, workflows and computations of those different fields. Centralized decision making can target new domains and professionally recruit the necessary domain experts, rather than relying on them to identify the opportunity for themselves and volunteer their time to a project that has not yet touched their field.
Practical benefits of this include:
 Provides a common language across domains including statistics, optimization, graph theory, machine learning, time series, geometry, modeling and many more.
 Provides a common language for engineers, data scientists, physicists, financial engineers and many more.
 Tasks that cross different data and computational domains are no harder than domainspecific tasks.
 Engaged with emergent fields such as blockchain.
4. Hard cases and boring stuff need to get done too
Much of the perceived success of open-source development comes from its access to “volunteer developers.” But volunteers tend to be drawn to the fun parts of projects—building new features that they personally want, or that they perceive others need. While this often starts off well and can quickly generate proof-of-concept tools, good software has a long tail of less glamorous work that also needs to be done. This includes testing, debugging, writing documentation (both developer and user), relentlessly refining user interfaces and workflows, and porting to a multiplicity of platforms and optimizing across them. Even when the work is done, there is a long-term liability in fixing and optimizing code that breaks as dependencies such as the operating system change over many years.
While it would not be impossible for a FOSS project to do these things well, the commercially funded approach of having paid employees directed to deliver a good end-user experience does, over the long term, a consistently better job on this “final mile” of usability than relying on goodwill.
Some practical benefits of this include:
 Tens of thousands of pages of consistent, highly organized documentation with over 100,000 examples.
 The most unified notebook interface in the world, unifying exploration, code development, presentation and deployment workflows in a consistent way.
 Write-once deployment over many platforms, both locally and in the cloud.
5. Crowdsourced decisions can be bad for you
While bad leadership is always bad, good leadership is typically better than compromises made in committees.
Your choice of computational tool is a serious investment. You will spend a lot of time learning the tool, and much of your future work will be built on top of it, as well as having to pay any license fees. In practice, it is likely to be a longterm decision, so it is important that you have confidence in the technology’s future.
Because opensource projects are directed by their contributors, there is a risk of hijacking by interest groups whose view of the future is not aligned with yours. The theoretical safety net of access to source code can compound the problem by producing multiple forks of projects, so that it becomes harder to share your work as communities are divided between competing versions.
While the commercial model does not guarantee protection from this issue, it does guarantee a single authoritative version of technology and it does motivate management to be led by decisions that benefit the majority of its users over the needs of specialist interests.
In practice, if you look at Wolfram Research’s history, you will see:
 Ongoing development effort across all aspects of the Wolfram technology stack.
 Consistency of design and compatibility of code and documents over 30 years.
 Consistency of prices and commercial policy over 30 years.
6. Our developers work for you, not just themselves
Many open-source tools are available as a side effect of their developers’ needs or interests. Tools are often created to solve a developer’s problem and are then made available to others, or researchers apply for grants to explore their own area of research and code is made available as part of academic publication. Figuring out how other people want to use tools, and creating workflows that are broadly useful, is one of those long-tail development problems that open source typically leaves to the user to solve.
Commercial funding models reverse this motivation. Unless we consider the widest range of workflows, spend time supporting them and ensure that algorithms solve the widest range of inputs, not just the original motivating ones, people like you will not pay for the software. Only by listening to both the developers’ expert input and the commercial teams’ understanding of their customers’ needs and feedback is it possible to design and implement tools that are useful to the widest range of users and create a product that is most likely to sell well. We don’t always get it right, but we are always trying to make the tool that we think will benefit the most people, and is therefore the most likely to help you.
Practical benefits include:
7. Unified computation requires unified design
Complete integration of computation over a broad set of algorithms requires significantly more design work than simply implementing a collection of independent algorithms.
Design coherence is important for enabling different computations to work together without making the end user responsible for converting data types, mapping functional interfaces or rethinking concepts by having to write potentially complex bridging code. Only design that transcends a specific computational field, and the details of computational mechanics, makes the power of those computations accessible to new applications.
The typical unmanaged, single-domain, open-source contributor will not easily bring this kind of unification, however knowledgeable they are within their domain.
Practical benefits of this include:
 Avoids costs of switching between systems and specifications (having to write excessive glue code to join different libraries with different designs).
 Immediate access to unanticipated functions without stopping to hunt for libraries.
 Wolfram developers can get the same benefits of unification as they create more sophisticated implementations of new functionality by building on existing capabilities.
 The Wolfram Language’s taskoriented design allows your code to benefit from new algorithms without having to rewrite it.
8. Unified representation requires unified design
Computation isn’t the only thing that Wolfram is trying to unify. To create productive tools, it is necessary to unify the representation of disparate elements involved in a computational workflow: many types of rich data, documents, interactivity, visualizations, programs, deployments and more. A truly unified computational representation enables abstraction above each of these individual elements, enabling new levels of conceptualization of solutions as well as implementing more traditional approaches.
The open-source model of bringing separately conceived, independently implemented projects together is the antithesis of this approach—either because developers design representations around a specific application that are not rich enough to be applied in other applications, or if they are widely applicable, they only tackle a narrow slice of the workflow.
Often the consequence is that data interchange is done in the lowest common format, such as numerical or textual arrays—often the native types of the underlying language. Associated knowledge is discarded; for example, that the data represents a graph, or that the values are in specific units, or that text labels represent geographic locations, etc. The management of that discarded knowledge, the coercion between types and the preparation for computation must be repeatedly managed by the user each time they apply a different kind of computation or bring a new open-source tool into their toolset.
Practical examples of this include:
 The Wolfram Language can use the same operations to create or transform many types of data, documents, interfaces and even itself.
 Wolfram machine learning tools automatically accept text, sounds, images and numeric and categorical data.
 As well as doing geometry calculations, the geometric representations in the Wolfram Language can be used to constrain optimizations, define regions of integration, control the envelope of visualizations, set the boundary values for PDE solvers, create Unity game objects and generate 3D prints.
9. Open source doesn’t bring major tech innovation to market
FOSS development tends to react to immediate user needs—specific functionality, existing workflows or emulation of existing closed-source software. Major innovations require anticipating needs that users do not know they have and addressing them with solutions that are not constrained by an individual’s experience.
As well as having a vision beyond incremental improvements and narrowly focused goals, innovation requires persistence to repeatedly invent, refine and fail until successful new ideas emerge and are developed to mass usefulness. Open source does not generally support this persistence over enough different contributors to achieve big, market-ready innovation. This is why most large open-source projects are commercial projects, started as commercial projects or follow and sometimes replicate successful commercial projects.
While the commercial model certainly does not guarantee innovation, steady revenue streams are required to fund the long-term effort needed to bring innovation all the way to product worthiness. Wolfram has produced key innovations over 30 years, not least having led the concept of computation as a single unified field.
Open source often does create ecosystems that encourage many small-scale innovations, but while bolder innovations do widely exist at the early experimental stages, they often fail to be refined to the point of usefulness in large-scale adoption. And open-source projects have been very innovative at finding new business models to replace the traditional, paid-product model.
Other examples of Wolfram innovation include:
 Wolfram invented the computational notebook, which has been partially mirrored by Jupyter and others.
 Wolfram invented the concept of automated creation of interactive components in notebooks with its Manipulate function (also now emulated by others).
 Wolfram develops automatic algorithm selection for all task-oriented superfunctions (Predict, Classify, NDSolve, Integrate, NMinimize, etc.).
10. Paid software offers an open quid pro quo
Free software isn’t without cost. It may not cost you cash upfront, but there are other ways it either monetizes you or costs you more later. The alternative business models that accompany open source, and their deferred and hidden costs, may be suitable for you, but it is important to understand them and their effects. If you don’t think about the costs, or believe there is no cost, you will likely be caught out later.
While you may not ideally want to pay in cash, I believe that for computation software, it is the most transparent quid pro quo.
“Open source” is often simply a business model that broadly falls into four groups:
Freemium: The freemium model of free core technology with additional paid features (extra libraries and toolboxes, CPU time, deployment, etc.) often relies on your failure to predict your longer-term needs. Because of the investment of your time in the free component, you are “in too deep” when you need to start paying. The problem with this model is that it creates a motivation for the developer toward designs that appear useful but withhold important components, particularly features that matter in later development or in production, such as security features.
Commercial traps: The commercial trap sets out to make you believe that you are getting something for free when you are not. In a sense, the freemium model sometimes does this by not being upfront about the parts that you will end up needing and having to pay for. But there are other, more direct traps, such as free software that uses patented technology. You get the software for free, but once you are using it they come after you for patent fees. Another common trap is free software that becomes non-free, such as recent moves with Java, or that starts including non-free components that gradually drive a wedge of non-free dependency until the supplier can demand what they want from you.
User exploitation: Various forms of business models center on extracting value from you and your interactions. The most common are serving you ads, harvesting data from you or giving you biased recommendations. The model creates a motivation to design workflows to maximize the hidden benefit, such as ways to get you to see more ads, to reveal more of your data or to sell influence over you. While not necessarily harmful, it is worth trying to understand how you are providing hidden value and whether you find that acceptable.
Free by side effect: Software is created by someone for their own needs, which they have no interest in commercializing or protecting. While this is genuinely free software, the principal motivation of the developer is their own needs, not yours. If your needs are not aligned, this may produce problems in support or development directions. Software developed by research grants has a similar problem. Grants drive developers to prioritize impressing funding bodies who provide grants more than impressing the end users of the software. With most research grants being for fixed periods, they also drive a focus on initial delivery rather than long-term support. In the long run, misaligned interests cost you in the time and effort it takes you to adapt the tool to your needs or to work around its developers’ decisions. Of course, if your software is funded by grants or by the work of publicly funded academics and employees, then you are also paying through your taxes—but I guess there is no avoiding that!
In contrast, the long-term commercial model that Wolfram chooses motivates maximizing the usefulness of the development to the end users, who are directly providing the funding, to ensure that they continue to choose to fund development through upgrades or maintenance. The model is very direct and upfront. We try to persuade you to buy the software by making what we think you want, and you pay to use it. The users who make more use of it generally are the ones who pay more. No one likes paying money, but it is clear what the deal is and it aligns our interest with yours.
Now, it is clearly true that many commercial companies producing paid software have behaved very badly and have been the very source of the “vendor lock-in” fear that makes open source appealing. Sometimes that stems from misalignment of management’s short-term interests with their company’s long-term interests, sometimes just because they think it is a good idea. All I can do is point to Wolfram’s history: in 30 years we have kept prices and licensing models remarkably stable (though every year you get more for your money) and have always avoided undocumented, encrypted and non-exportable data and document formats and other nasty lock-in tricks. We have always tried to be indispensable rather than “locked in.”
In all cases, code is free only when the author doesn’t care, because they are making their money somewhere else. Whatever the commercial and strategic model is, it is important that the interests of those you rely on are aligned with yours.
Some benefits of our choice of model have included:
 An all-in-one technology stack that has everything you need for a given task.
 No hidden data gathering and sale or external advertising.
 Longterm development and support.
11. It takes steady income to sustain longterm R&D
Before investing work into a platform, it is important to know that one is backing the right technology not just for today but into the future. You want your platform to incrementally improve and to keep up with changes in operating systems, hardware and other technologies. This takes sustained and steady effort and that requires sustained and steady funding.
Many open-source projects with their casual contributors and sporadic grant funding cannot predict their capacity for future investment and so tend to focus on short-term projects. Such short bursts of activity are not sufficient to bring large, complex or innovative projects to release quality.
While early enthusiasm for an open-source project often provides sufficient initial effort, sustaining the increased maintenance demand of a growing code base becomes increasingly problematic. As projects grow in size, the effort required to join a project increases. It is important to be able to motivate developers through the low-productivity early months, which, frankly, are not much fun. Salaries are a good motivation. When producing good output is no longer personally rewarding, open-source projects that rely on volunteers tend to stall.
A successful commercial model can provide the sustained and steady funding needed to make sure that the right platform today is still the right platform tomorrow.
You can see the practical benefit of steady, customerfunded investment in Wolfram technology:
12. Bad design is expensive
Much has been written about how the total cost of ownership of major commercial software is often lower than that of free open-source software, when you take into account productivity, support costs, training costs, etc. While I don’t have the space here to argue that out in full, I will point out that nowhere are those arguments more true than in unified computation. Poor design and poor integration in computation result in an explosion of complexity, which brings with it a heavy price for usability, productivity and sustainability.
Every time a computation chokes on input that is an unacceptable type or out of acceptable range or presented in the wrong conceptualization, that is a problem for you to solve; every time functionality is confusing to use because the design was a bit muddled and the documentation was poor, you spend more of your valuable life staring at the screen. Generally speaking, the users of technical software are more expensive people who are trying to produce more valuable outputs, so wasted time in computation comes at a particularly high cost.
It’s incredibly tough to keep the Wolfram Language easy to use and have functions “just work” as its capabilities continue to grow so rapidly. But Wolfram’s focus on global design (see it in action) together with high effort on the final polish of good documentation and good user interface support has made it easier and more productive than many much smaller systems.
Summary: Not being open source makes the Wolfram Language possible
As I said at the start, the open-source model can work very well in smaller, self-contained subsets of computation where small teams can focus on local design issues. Indeed, the Wolfram technology stack makes use of and contributes to a number of excellent open-source libraries for specialized tasks, such as MXNet (neural network training), GMP (high-precision numeric computation), LAPACK (numeric linear algebra) and for many of the 185 import/export formats automated behind the Wolfram Language commands Import and Export. Where it makes sense, we make self-contained projects open source, such as the Wolfram Demonstrations Project, the new Wolfram Function Repository and components such as the Client Library for Python.
But our vision is a grand one: to unify all of computation into a single coherent language. For that, the FOSS development model is not well suited.
The central question is: How do you organize such a huge project, and how do you fund it so that you can sustain the effort required to design and implement it coherently? Licenses and prices are details that follow from that. By creating a company that can coordinate the teams tightly, and by generating a steady income by selling tools that customers want and are happy to pay for, we have been able to make significant progress on this challenge, in a way that is ready for the next round of development. I don’t believe it would have been possible using an open-source approach, and I don’t believe the future we have planned would be possible that way either.
This does not rule out exposing more of our code for inspection. However, right now, large amounts of code are visible (though not conveniently or in a publicized way) but few people seem to care. It is hard to know if the resources needed to improve code access, for the few who would make use of it, are worth the cost to everyone else.
Perhaps I have missed some reasons, or you disagree with some of my rationale. Leave a comment below and I will try to respond.
6. Computing Exact Uncertainties—Physical Constants in the Current and in the New SI (Fri., March 29)
Introduction
In the so-called new SI, the updated version of the International System of Units that will define the seven base units (second, meter, kilogram, ampere, kelvin, mole and candela) and that goes into effect May 20 of 2019, all SI units will be definitionally based on exact values of fundamental constants of physics. And as a result, all the named units of the SI (newton, volt, ohm, pascal, etc.) will ultimately be expressible through fundamental constants. (Finally, fundamental physics will be literally ruling our daily life.)
Here is how things will change from the evening of Monday, May 20, to the morning of Tuesday, May 21, of this year.
Computing this table will be the goal of this blog. So, let’s start with a short recap of what will change in the new SI.
In addition to the well-known exact value of the speed of light, in a few weeks four further physical constants—the Planck constant, the Boltzmann constant, the Avogadro constant and Millikan’s constant (more often called the elementary charge)—will have exact values. The decision for this change was internationally accepted last November (I wrote about it in my last blog).
Here is the current draft of the SI Brochure. Here is a snippet from page 12 of this document.
Note that in these definitions the decimal numbers are meant as exact decimal numbers, rather than, say, machine numbers on a computer that have a finite precision and are not exact numbers. The Cs-133 hyperfine splitting frequency, the speed of light and the “luminous efficacy” already have exact values today.
The World Discusses the Forthcoming Changes
This change will have some interesting consequences for other physical constants: Some constants that are currently measured and have values with uncertainties will become exact and some constants that currently have exact values will in the future have approximate values with finite uncertainties. These changes are unavoidable to guarantee the overall consistency of the system.
In the first issue of Physics World this year, a letter to the editor by William Gough touched on this subject; he wrote:
With the fixing of the charge on the electron (e) and the Planck constant (h), all the units of physics are now “set in stone”, which is very satisfying. But it does raise one uncomfortable question. The fine structure constant α = e²/(4πε₀ħc), where c is the speed of light and ε₀ is the permittivity of free space. From the familiar equations c² = 1/(μ₀ε₀) and μ₀ = 4π×10⁻⁷ H/m, we quickly find that α = μ₀ce²/(2h). This is of course a pure number with no dimensions, and it has now been set in stone as exactly 1/137.13601, which is very close to the accepted value. This is not surprising since this latter would have been used in the agreed new values for e and h. But nature has its own value, unknown to us at present, which is therefore set in diamond. We might all be forgiven for implying that we know better than nature. But what if a future theory of the universe becomes accepted, which produces an exact value for α which is significantly different from its accepted value? Is this likely to happen? There have been attempts to find a theoretical value for α, but they involve fearsome and controversial quantum electrodynamics.
The problem is that in the new SI system, both μ₀ and ε₀ will now have inexact values, with some uncertainty. In this blog, we will use the Wolfram Language and its knowledge about physical units and constants to see how these and other physical constants will gain (or lose) uncertainty, and why this is a mathematical consequence of the definition of the base units.
Quick Recap of Relevant Wolfram Language Ingredients
The Wolfram Language is a uniquely suited environment to carry out numerical experiments and symbolic computations to shed some light on the consequences. In addition to its general computation capabilities, three components of the system turn out to be very useful here:
1) The Wolfram Language’s units and physical quantity framework.
“Classical” units (such as meters, feet, etc.) can be used in computations and visualizations. And, of course, in unit conversions.
Quantity[10, "Meters"]
The conversion to US customary units results in a fraction (not an approximate real number!) due to exactly defined relations of these two units.
UnitConvert[Quantity[10, "Meters"], "Feet"]
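The same exactness can be mirrored outside the Wolfram Language with ordinary rational arithmetic; here is a minimal Python sketch (the 0.3048 m definition of the international foot is exact, so the result is an exact fraction just like the UnitConvert output):

```python
from fractions import Fraction

# 1 ft = 0.3048 m exactly (international yard and pound agreement, 1959)
METERS_PER_FOOT = Fraction(3048, 10000)

# 10 m expressed in feet stays an exact rational number
feet = 10 / METERS_PER_FOOT
print(feet)         # 12500/381
print(float(feet))  # approximately 32.808
```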
Physicists (especially) like to use “natural” units. Often these natural units are just physical constants or combinations thereof. For instance, the speed of light (here input in natural language form).
Quantity[1, "SpeedOfLight"]
Expressed in SI units (as it is a speed, the units meters and seconds are needed), the speed of light has an exact value.
UnitConvert[%, "SIBase"]
The Planck constant, on the other hand, currently does not have an exact value. So its magnitude when expressed in SI base units is an approximate decimal number.
Quantity[1, "PlanckConstant"]
UnitConvert[%, "SIBase"]
Note that the precision of the 6.626070… reflects the number of known digits.
Precision[%]
{Quantity[1, "SpeedOfLight"] -> UnitConvert[Quantity[1, "SpeedOfLight"], "SIBase"],
 Quantity[1, "PlanckConstant"] -> UnitConvert[Quantity[1, "PlanckConstant"], "SIBase"]}

Precision /@ Last /@ %
This is the latest recommended value for the Planck constant, published in CODATA 2017 in preparation for making the constants exact. Here is the relevant table:
Physical constants (or combinations thereof) that connect two physical quantities can often be used as natural units. The simplest examples would be to measure speeds in terms of the speed of light, or microscopic angular momentum in terms of ħ. Or energy could be measured in terms of mass with an implicit factor of c². The function DimensionalCombinations can be used to find combinations of physical constants that allow the connection of two given physical quantities. For instance, the following relations between mass and energy can be constructed:
DimensionalCombinations[{"Mass", "Energy"}, IncludeQuantities -> "PhysicalConstants",
  GeneratedParameters -> None] // Select[MemberQ[#, "Energy", Infinity] && MemberQ[#, "Mass", Infinity] &]
The first identity is just Einstein’s famous E = mc², the second is the first one in disguise, and the third one expresses (dimensionally) the same mass–energy connection through a different combination of constants.
2) The "PhysicalConstant" entity type, recently added to the Wolfram Knowledgebase.
Functions and objects in the Wolfram Language are “born computationally,” meaning they are ready to be used for and in computations. But for describing and modeling the real world, one needs data about the real world. The entity framework is a convenient and fully integrated way to get to such data. Here is some data about the electron, the proton and the neutron.
EntityValue[{Entity["Particle", "Electron"],
Entity["Particle", "Proton"],
Entity["Particle", "Neutron"]}, {EntityProperty["Particle",
"BaryonNumber"], EntityProperty["Particle", "Charge"],
EntityProperty["Particle", "Spin"],
EntityProperty["Particle", "GFactor"],
EntityProperty["Particle", "Mass"],
EntityProperty["Particle", "Radius"]}, "Dataset"]
One of the new kids on the entity type block is physical constants. Currently the Knowledgebase knows about 250+ physical constants.
EntityValue["PhysicalConstant", "EntityCount"]
Here are a dozen randomly selected examples. There is no clear definition for what exactly constitutes a physical constant: masses of fundamental particles, the parameters of the Lagrangian of the Standard Model, etc. For convenience, the domain also contains astronomical constants according to the Astronomical Almanac.
RandomEntity["PhysicalConstant",12]
The most fundamental physical constants have been called class C constants in a well-known paper by Jean-Marc Lévy-Leblond. Here are the class C and B constants.
Entity["PhysicalConstant",
  EntityProperty["PhysicalConstant", "LevyLeblondClass"] -> "C"] // EntityList

Entity["PhysicalConstant",
  EntityProperty["PhysicalConstant", "LevyLeblondClass"] -> "B"] // EntityList
Take, for instance, the natural unit of time, the Planck time. The functions ToEntity and FromEntity allow one to seamlessly go back and forth between physical constants as units and physical constants as entities. Here is the entity corresponding to the unit Planck time.
Quantity[1, "PlanckTime"]
ToEntity[%]
The Knowledgebase has a variety of meta-information about it, e.g. its values in the last CODATA sheets.
%[EntityProperty["PhysicalConstant", "ValueAssociation"]]
The last output, which contains the value and the uncertainty, brings us to the third important feature that will be useful later:
3) The introduction of the function Around[] in Version 12 of the Wolfram Language. The function Around[] represents an uncertain value with a mean and an uncertainty. The arithmetic model of Around[] follows the GUM (Guide to the Expression of Uncertainty in Measurement)—not to be confused with Leibniz’s plus-minus calculus. Here is such a value with uncertainty.
ma=Around[0.99433,0.0292]
The most important and useful aspect of computations with quantities that have uncertainties is that they take correlations properly into account. Naively using such quantities in arithmetic, like plain numbers or intervals, could under- or overestimate the resulting uncertainty.
(ma+1)/(ma+2)
The function AroundReplace[] does take correlation into account.
AroundReplace[(m + 1)/(m + 2), m -> ma]
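The difference between the two treatments can be sketched numerically outside the Wolfram Language with first-order, GUM-style error propagation for the same expression f(m) = (m+1)/(m+2); the function names below are illustrative, not part of any library:

```python
import math

m0, sigma = 0.99433, 0.0292   # the Around[0.99433, 0.0292] value from above

def propagate_correlated(m, u):
    """First-order (GUM-style) propagation for f(m) = (m+1)/(m+2),
    treating both occurrences of m as the same, fully correlated quantity."""
    f = (m + 1) / (m + 2)
    dfdm = 1 / (m + 2) ** 2   # derivative of (m+1)/(m+2) with respect to m
    return f, abs(dfdm) * u

def propagate_naive(m, u):
    """Wrong model: numerator and denominator treated as independent."""
    a, b = m + 1, m + 2
    rel = math.hypot(u / a, u / b)
    return a / b, (a / b) * rel

f_c, u_c = propagate_correlated(m0, sigma)
f_n, u_n = propagate_naive(m0, sigma)
# The naive estimate overstates the uncertainty several-fold compared with u_c.
```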
Back to the Letter to the Editor
Now let’s use these three ingredients and have a more detailed look at the preceding letter to the editor.
With the current approximate values for e and ħ, these two values for the fine-structure constant agree within their uncertainties. The first one is the expression from the letter to the editor and the second is the quantity (Quantity[]) representing the fine-structure constant.
{Quantity[1, "ElementaryCharge"]^2/(
   4 \[Pi] 1/(Quantity[4 \[Pi] 10^-7, ("Henries")/("Meters")] Quantity[1, "SpeedOfLight"]) Quantity[1, "ReducedPlanckConstant"]),
 Quantity[None, "FineStructureConstant"]} // UnitConvert
Every few years, CODATA publishes the official values of the fundamental constants (see here for the fine-structure constant); as I’ve mentioned, the values used in the Wolfram Language are these latest CODATA values. The finite uncertainty is reflected in the precision of the number.
Note that the directly measured value of the fine-structure constant is a bit more precise than the one that expresses the fine-structure constant through other constants.
Precision /@ %
If we use the forthcoming exact values for e and h and use the current exact value for μ₀, we obtain the following exact value for the fine-structure constant, in the form of π times an exact rational number.
With[{e = Quantity[1602176634/1000000000 10^-19, "Coulombs"],
  \[HBar] = 1/(2 \[Pi]) Quantity[662607015/100000000 10^-34, "Joules" "Seconds"]},
 e^2/(4 \[Pi] 1/(Quantity[4 \[Pi] 10^-7, ("Henries")/("Meters")] Quantity[1, "SpeedOfLight"]) \[HBar])]
It is highly unlikely the Lord, who doesn’t even play dice, would choose such a number for the value of α in our universe. This means that while e and h will be fixed in the new SI, unavoidably the current exact values for μ₀ and ε₀ must be unfixed (see also Goldfarb’s paper about μ₀ in the new SI). (We will come back to why μ₀ and ε₀ must become inexact in a moment.)
This means after May 20 of this year, these outputs will be different.
{Quantity[1, "MagneticConstant"],
Quantity[1, "ElectricConstant"]} // UnitConvert
%//UnitSimplify
(As a brief side note, the "PhysicalConstant" entity type also has conjectured values for constants, such as the fine-structure constant.)
RandomSample[
"Value" /.
Entity["PhysicalConstant", "FineStructureConstant"][
EntityProperty["PhysicalConstant",
"ConjecturedValueAssociation"]][[All, 2]], 6]
N[%, 20]
Now, aside from the theological argument about the exact form of the fine-structure constant, from a physics point of view, why should μ₀ and ε₀ become inexact? As a plausibility argument, let’s look at ε₀. One of its most prominent appearances is in Coulomb’s law.
FormulaData["CoulombsLaw"]
In the current SI, the ampere has an “exact” definition:
The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to 2×10⁻⁷ newton per metre of length.
This definition uses the purely mechanical units newton and meter (meaning, after expansion, second, meter and kilogram) to define the ampere. No relation to the charge of an electron is made, and in the current SI, the elementary charge is an experimentally measured quantity.
Quantity[None, "ElementaryCharge"] // UnitConvert
And this experimentally measured value has changed over the years (and gotten more precise).
Entity["PhysicalConstant", "ElementaryCharge"][
  EntityProperty["PhysicalConstant", "ValueAssociation"]] // Take[#, 3] &
The force on the left-hand side of Coulomb’s law (expressed in newtons) contains the base unit kilogram, which, after the value of the Planck constant is fixed, is exactly defined too. As there is no reason to assume that the laws of nature can all be expressed in finite rational numbers, the only possible “moving part” in Coulomb’s law will be ε₀. Its numerical value has to be determined, and its value will make the left-hand side and the right-hand side of Coulomb’s law match up.
From a more fundamental physics point of view, the fine-structure constant is the coupling constant that determines the strength of electromagnetic interactions. And maybe one day physics will be able to compute the value of the fine-structure constant, but we are not there yet. Just a choice of unit definitions cannot fix the value of α.
Now, do both μ₀ and ε₀ really become unfixed, or is it possible to keep one of them exact? Because of the already exact speed of light and the relation c² = 1/(μ₀ε₀), if either μ₀ or ε₀ were exact, the other one would have to be exact too. We know that at least one must become unfixed, so it follows that both must be unfixed.
The values that are now given to the Planck constant, the Boltzmann constant, the Avogadro constant and the elementary charge are neither arbitrary nor fully determined. They are determined to about eight digits, so that the units they define after May 20 match the “size” of the units they define before May 20. But digits further to the right are not determined. So the future exact value of the elementary charge could have been fixed with additional arbitrary trailing digits rather than as 1.602176634×10⁻¹⁹ C. It’s Occam’s razor and rationality that let us use the shorter value.
On a more technical level, the slip in the preceding computation was that, through the term μ₀ in the formula, the ampere from before the redefinition was implicitly used (remember the 2×10⁻⁷ newton in its definition), while the exact value of the elementary charge, meaning the definition of the ampere after the redefinition, was also used. And one always has to stay within one unit system.
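Staying within the new unit system, the consequence for μ₀ can be sketched numerically in Python (with illustrative variable names): with h, e and c exact, μ₀ = 2hα/(e²c) inherits the relative uncertainty of the measured α. The CODATA 2018 value of α is used here for illustration:

```python
import math

# Exact constants in the new SI
h = 6.62607015e-34    # Planck constant, J s (exact)
e = 1.602176634e-19   # elementary charge, C (exact)
c = 299792458.0       # speed of light, m/s (exact)

# Measured fine-structure constant (CODATA 2018), with standard uncertainty
alpha = 7.2973525693e-3
u_alpha = 0.0000000011e-3

# In the new SI, mu_0 is derived: alpha = mu_0 c e^2 / (2 h)  =>  mu_0 = 2 h alpha / (e^2 c)
mu0 = 2 * h * alpha / (e**2 * c)
u_mu0 = mu0 * (u_alpha / alpha)   # the exact factors contribute no uncertainty

old_mu0 = 4 * math.pi * 1e-7      # the pre-2019 exact value
# mu0 agrees with 4*pi*10^-7 H/m to roughly nine digits, but is no longer exact.
```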
Computing the Table of Uncertainty-Optimized Forms
So, the natural question is: What should these “unfixed” values be? In my last blog, I “manually” constructed the new value of μ₀. What can be done manually can be done by a computer program, so let’s implement a little program that computes the uncertainty-optimized form of derived physical constants. In a forward-oriented approach, an entity class of the seven constants that define the new SI is already available.
Here are the constants that will have an exact value in the new SI.
✕
SIExactConstants =
EntityList[EntityClass["PhysicalConstant", "SIExact"]]
The current values, together with their uncertainty (now using the function Around[]) are the following:
✕
TextGrid[Join[{Thread[
     Style[{"constant", "symbol", "current CODATA value"},
      Italic]]}, {#,
      FromEntity[#] /. 1 -> None, #[
       EntityProperty["PhysicalConstant",
        "Value", {"Uncertainty" -> "Around"}]]} & /@ {Entity[
      "PhysicalConstant", "PlanckConstant"],
     Entity["PhysicalConstant", "BoltzmannConstant"],
     Entity["PhysicalConstant", "ElementaryCharge"],
     Entity["PhysicalConstant", "AvogadroConstant"]}], Dividers -> All,
 Background -> {Automatic, {LightGray, {None}}}]
The "PhysicalConstant" entity domain allows us to get to the new, forthcoming physical quantity values. Note that like in all computer languages, exact integers and rationals are either explicit integers or rational numbers, but not decimal numbers.
✕
SIData=EntityValue[SIExactConstants,
{"Entity","Quantity","ValueAssociation"}]/. 1->None;
TextGrid[Join[{Thread[Style[{"constant","symbol","new exact value"},Italic]]},{#1,#2,"Value"/.("SIBrochure2016Draft"/.#3)}& @@@SIData],Dividers->All,Background->{Automatic,{LightGray,{None}}}]
Many physical constants can be related by equations given by physical theories of different fields of physics. In what follows, we want to restrict ourselves to the theory of fundamental electromagnetic phenomena, in which the uncertainties of constants reduce to those of the fine-structure constant α and the Rydberg constant R∞. If we included, for instance, gravitational phenomena, we would have to use the gravitational constant G, which is independently measured, though it has a very large uncertainty (which is why the NSF ran the so-called “Big-G Challenge”).
In the following, we will restrict ourselves to electric, magnetic and mass quantities whose uncertainties reduce to those of α and R∞.
Here we use the new function Around to express the values of α and R∞ together with their uncertainties.
✕
\[Alpha] =
Around @@
Entity["PhysicalConstant",
"FineStructureConstant"][{EntityProperty["PhysicalConstant",
"Value"],
EntityProperty["PhysicalConstant", "StandardUncertainty"]}]
✕
R\[Infinity] =
Around @@
Entity["PhysicalConstant",
"RydbergConstant"][{EntityProperty["PhysicalConstant", "Value"],
EntityProperty["PhysicalConstant", "StandardUncertainty"]}]
The current (CODATA 2014) relative uncertainty of α is about 2.3×10^-10, and that of R∞ is about 5.9×10^-12. Powers of α have a slightly larger uncertainty.
✕
relativeUncertainty[Around[x_, \[Delta]_]] := \[Delta]/x
relativeUncertainty[Quantity[a_Around, u_]] := relativeUncertainty[a]
relativeUncertainty[x_?ExactNumberQ]:=0
relativeUncertainty[s_String]:=s
✕
TextGrid[Join[{Thread[
    Style[{"combinations", "rel. uncertainty"}, Italic]]},
  (relativeUncertainty /@ {\[Alpha], 1/\[Alpha], \[Alpha]^2,
      R\[Infinity], R\[Infinity]/\[Alpha]^2}) //
   With[{\[Alpha] =
       Entity["PhysicalConstant", "FineStructureConstant"],
      R\[Infinity] = Entity["PhysicalConstant", "RydbergConstant"]},
     Transpose[{{\[Alpha], 1/\[Alpha], \[Alpha]^2, R\[Infinity],
        R\[Infinity]/\[Alpha]^2}, NumberForm[#, 2] & /@ #}]] &],
 Dividers -> All, Background -> {Automatic, {LightGray, {None}}}]
Here is a plot of the base-10 log of the relative uncertainty of α^a R∞^b as a function of a and b. For small powers, the relative uncertainty of the product depends only weakly on the exponents a and b. The plot shows that the dependence of the uncertainty of α^a R∞^b is dominated by a, the exponent of the fine-structure constant. This observation is explained by the fact that the uncertainty of the Rydberg constant is almost 40 times smaller than that of the fine-structure constant.
✕
Plot3D[relativeUncertainty[\[Alpha]^a R\[Infinity]^b],{a,0,4},{b,0,4},
PlotPoints -> 20,MaxRecursion->1,
Mesh -> None, AxesLabel-> {"a","b",None},
ScalingFunctions->{"Linear","Linear","Log"}]
To compute the uncertainties of various constants in the new SI, we will use the following steps:
Retrieve equivalent representations for physical constants available from the "PhysicalConstant" entity domain.
These identities between physical constants are laws of physics, and as such, should hold in the old as well as in the new SI.
Use the relations as a set of algebraic relations and use variable-elimination techniques to express a constant through a combination of the seven base constants of the new SI, the fine-structure constant α and the Rydberg constant R∞.
These are the nine constants that we will allow in the definiens of the constant under consideration. (Technically there are 10 constants in the list, but because of the simple rescaling relation ℏ = h/(2π) between h and ℏ, there are nine different constants in this list.)
✕
basicConstantNames =
 Join[Last /@ SIExactConstants, {"ReducedPlanckConstant",
   "FineStructureConstant", "RydbergConstant"}]
The "PhysicalConstant" entity domain has a lot of information about relations between physical constants. For instance, here are equivalent forms of the four constants that currently are measured and soon will be defined to have exact values.
✕
TextGrid[Join[{Thread[
     Style[{"constant", "symbol", "equivalent forms"}, Italic]]},
   idTable = {#, FromEntity[#],
       Select[#[
         EntityProperty["PhysicalConstant", "EquivalentForms"]],
        FreeQ[#,
          Quantity[_] |
           "AtomicSpecificHeatConstant", \[Infinity]] &]} & /@
    {Entity["PhysicalConstant", "PlanckConstant"],
     Entity["PhysicalConstant", "BoltzmannConstant"],
     Entity["PhysicalConstant", "ElementaryCharge"],
     Entity["PhysicalConstant", "AvogadroConstant"]}] /.
  Quantity[1, s_String] :> Quantity[None, s], Dividers -> All,
 Background -> {Automatic, {LightGray, {None}}}]
Within the precision of measured values, all these identities hold right now. Here is a quick numerical check for the alternative forms of the Planck constant. But the concrete numerical value, especially the uncertainty, depends on the actual form of the representation. Using Around[], we can conveniently compute the resulting uncertainties.
✕
toAround[expr_] := expr /. Quantity[x_, u_] :>
   x (u /. s_String :> ToEntity[Entity["PhysicalConstant", s]][
       EntityProperty[
        "PhysicalConstant", "Value", {"Uncertainty" -> "Around"}]])
✕
Column[Column[{#1, Style[toAround[#1], 10], UnitSimplify@
      Activate[toAround[#1]]}] & /@ Last[First[idTable]],
 Dividers -> All]
Here is a graphical representation of the resulting uncertainties of the various representations. The very large uncertainty of the tenth representation can be traced back to the large uncertainty of the second radiation constant.
✕
ListPlot[MapIndexed[Callout[{#1[[2]], #2[[1]]}, #1[[3]] /.
     Quantity[1, s_String^e_. /; e >= 0] :> Quantity[None, s]] &,
  Sort[Function[f, With[{a = UnitSimplify@
         (Activate[toAround[f]] -
           Quantity[165651751/(5/2 10^41), "Joules" "Seconds"])},
      {relativeUncertainty[a], a, f}]] /@
    Last[First[idTable]]]],
 FrameLabel -> Automatic, Frame -> True, Axes -> True,
 AspectRatio -> 1]
And, again within the uncertainty of the constants, these relations should continue to hold after the redefinition. Now which of these representations can best be used after the redefinition to minimize the uncertainties? Maybe none of these given equivalents is optimal and by combining some of these representations a better one (meaning one with smaller resulting uncertainty) could be constructed.
Now for the algebraic elimination step, we convert the constants occurring in the equivalent entities (this is easily possible because the second arguments of Entity["PhysicalConstant",.] and Quantity[1,.] are aligned). The reason we use entities rather than quantities in the following computation is twofold: first, the entities are nice, easytoread representations; and second, algebraic functions (like GroebnerBasis) do not look inside quantities to determine the nature of their first argument.
✕
quantitiesToEntities[expr_] := Activate[expr//. (Quantity[x_,u_] :> x (u /. s_String:> ToEntity[Quantity[1, s]]))/.
Entity["PhysicalConstant","ReducedPlanckConstant"] :> Entity["PhysicalConstant","PlanckConstant"]/(2Pi)]
✕
toEntityEquation[lhs_, rhs_] := quantitiesToEntities[lhs]==quantitiesToEntities[rhs]
✕
entitiesToQuantities[expr_] := expr//.Entity["PhysicalConstant",c_]:> Quantity[1,c]
Then we make all identities polynomial. This step means: (1) subtracting the left-hand side from the right-hand side; and (2) ensuring that no fractional powers (e.g. square roots) of constants appear any longer. We realize this transformation to a polynomial by looking for all fractional exponents and raising to the LCM of their denominators.
✕
toPolynomial[a_ == b_] :=
 Module[{exp = LCM @@ Denominator[
      Cases[b, (_?(MemberQ[#, _Entity, {0, \[Infinity]}] &))^e_. :> e, \[Infinity]]]},
  Numerator[Together[a^exp - b^exp]]]
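The denominator-LCM idea is language independent; here is a short Python sketch of the same computation (the function name is hypothetical, not from the post):

```python
from fractions import Fraction
from math import lcm

def clearing_power(exponents):
    """Smallest integer power that clears all fractional exponents:
    the LCM of the denominators of the exponents."""
    return lcm(*(Fraction(x).denominator for x in exponents))

# x^(1/2) y^(3/2) z^2 becomes polynomial after squaring;
# x^(2/3) y^(1/2) needs the 6th power.
print(clearing_power([Fraction(1, 2), Fraction(3, 2), 2]))  # 2
print(clearing_power([Fraction(2, 3), Fraction(1, 2)]))     # 6
```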
Here is one of the preceding equations that contains constants to a fractional power.
✕
toEntityEquation[Quantity[1, "ElementaryCharge"],
 1/Sqrt[3] \[Pi] Sqrt[Quantity[1, ("BoltzmannConstant")^2] Quantity[1, 1/("LorenzNumber")]]]
After polynomialization, we have a multivariate polynomial in the three occurring constants. These polynomials have to vanish identically.
✕
toPolynomial[%]
The next table shows the function toPolynomial applied to the equivalent forms shown earlier for the elementary charge. After canonicalizing ℏ to h/(2π), some of the resulting polynomials are identical.
✕
TextGrid[Join[{Thread[
    Style[{"identity", "polynomial form"}, Italic]]},
  With[{lhs = idTable[[3, 2]], eforms = idTable[[3, 3]]},
   {lhs == #, toPolynomial[toEntityEquation[lhs, #]]} & /@
    eforms]],
 Dividers -> All, Background -> {Automatic, {LightGray, {None}}}]
Now, given any physical constant (not one of the constants used in defining the new SI), we retrieve a sufficient amount of equivalent forms to form a set of equations.
✕
getAllRelations[constant : Entity["PhysicalConstant", c_]] :=
Module[{eforms, eforms2, eformsA, eformsB, missing},
eforms = constant[{"Quantity", "EquivalentForms"}];
missing =
Complement[Cases[Last[eforms], _String, \[Infinity]],
basicConstantNames];
eforms2 = Entity["PhysicalConstant", #][
{"Quantity", "EquivalentForms"}] /@
missing;
eformsA = Flatten[Function[{c1, cList},
toEntityEquation[c1, #] /@ cList] @@@
Join[{eforms}, eforms2]];
eformsB = toPolynomial /@ eformsA;
Select[eformsB, FreeQ[#, _Quantity, \[Infinity]] &] //
DeleteDuplicates
]
Here is the list of resulting polynomial equations for expressing the electron mass.
✕
relationList =
getAllRelations[Entity["PhysicalConstant", "ElectronMass"]]
We will express all uncertainties in terms of the uncertainties of α and R∞. These two constants suffice to express the uncertainty of many physical constants. Because their uncertainties are independent of each other and quite small, these two approximately known constants are best suited to express the uncertainty-optimized version of many physical constants. And, of course, we allow all seven exact constants of the new SI; because they are exact quantities, their presence does not change uncertainties.
✕
alpha = Entity["PhysicalConstant", "FineStructureConstant"];
rydbergR = Entity["PhysicalConstant", "RydbergConstant"] ;
The main work of expressing a given constant through the SI constants together with α and R∞ will be done by the function GroebnerBasis. The option setting MonomialOrder -> EliminationOrder is the crucial step that removes all physical quantities “not wanted,” leaving one polynomial equation in the exactly defined constants and (if needed) the fine-structure and Rydberg constants.
✕
minimalEquation[constant_] :=
 Module[{relationList, keep, remove, gb, positivity},
  relationList = getAllRelations[constant];
  keep = Join[SIExactConstants, {constant, alpha, rydbergR}];
  remove =
   Complement[Union[Cases[relationList, _Entity, \[Infinity]]],
    keep];
  gb = GroebnerBasis[relationList, keep, remove,
    MonomialOrder -> EliminationOrder];
  positivity = And @@ ((# > 0) & /@ Cases[gb, _Entity, \[Infinity]]);
  If[gb =!= {},
   constant == (constant /.
      (Solve[
           First[gb] == 0 \[And] positivity, constant][[
          1]] //
       Simplify[#, positivity] &)), {}]
  ]
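The elimination idea itself is not tied to the Wolfram Language. As a cross-check, here is a SymPy sketch (the variable names and the two input identities are assumptions chosen for illustration; solve stands in for the Gröbner-basis elimination) that eliminates ε₀ and e from the standard fine-structure-constant and Rydberg-constant formulas:

```python
from sympy import symbols, solve, simplify

alpha, R, me, e, h, c, eps = symbols(
    'alpha R_inf m_e e h c epsilon_0', positive=True)

# Two standard identities, written so that each expression vanishes:
eq1 = alpha - e**2 / (2 * eps * h * c)          # fine-structure constant
eq2 = R - me * e**4 / (8 * eps**2 * h**3 * c)   # Rydberg constant

# Eliminate eps and e by solving for them together with m_e:
sol = solve([eq1, eq2], [eps, me], dict=True)[0]
m_expr = simplify(sol[me])
print(m_expr)
```

Under these assumed relations, the eliminated form of the electron mass contains only h, c, α and R∞, matching the result obtained with GroebnerBasis.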
Carrying out the constant-elimination process on the electron mass, we obtain mₑ = 2hR∞/(α²c).
✕
minimalEquation[Entity["PhysicalConstant", "ElectronMass"]]
✕
entitiesToQuantities[Inactivate[Evaluate[%],Equal]]
The uncertainty of this expression comes from the term R∞/α². We define a function that extracts the term that causes the uncertainty.
✕
uncertaintyFactor[expr_] :=
 If[FreeQ[expr, alpha | rydbergR, \[Infinity]], "exact",
   (expr /.
      Entity[_,
        Except["FineStructureConstant" | "RydbergConstant"]] :>
       1) //.
    a_?NumericQ e_Entity^exp_. :> e^exp] /. e_Entity :> FromEntity[e]
✕
uncertaintyFactor[(
2 Entity["PhysicalConstant", "PlanckConstant"] Entity[
"PhysicalConstant", "RydbergConstant"])/(
Entity["PhysicalConstant", "FineStructureConstant"]^2 Entity[
"PhysicalConstant", "SpeedOfLight"])]
For a more compact display, we define a function that returns the equivalent form, and the old and new uncertainties as a row.
✕
newSIRow[a_ == b_] :=
 With[{dom = uncertaintyFactor[b]},
  {a, entitiesToQuantities[a] /. Quantity[1, u_] :> Quantity[None, u],
   entitiesToQuantities[b] /. Quantity[1, u_] :> Quantity[None, u],
   Which[a["StandardUncertainty"] === Missing["Exact"], 0,
    a["StandardUncertainty"] === Missing["NotAvailable"],
    NumberForm[10^-Precision[QuantityMagnitude[
        UnitConvert[a["Value"]]]], 2],
    True,
    NumberForm[a["StandardUncertainty"]/a["Value"], 2]],
   NumberForm[
    relativeUncertainty[dom /. HoldPattern[Quantity[x_, p_]] :>
       x (p /. {"FineStructureConstant" -> \[Alpha],
          "RydbergConstant" -> R\[Infinity]})], 2],
   dom}]
We end with a table of the old and new uncertainties for more than a dozen physical constants. We select this list as a representative example; other constants could be treated in a similar fashion (this would potentially require adding further imprecise constants to be preserved, such as the gravitational constant or parameters of the standard model).
✕
constantList = {Entity["PhysicalConstant", "MagneticConstant"],
Entity["PhysicalConstant", "ElectricConstant"],
Entity["PhysicalConstant", "VacuumImpedance"],
Entity["PhysicalConstant", "JosephsonConstant"],
Entity["PhysicalConstant", "VonKlitzingConstant"],
Entity["PhysicalConstant", "MolarGasConstant"],
Entity["PhysicalConstant", "FaradayConstant"],
Entity["PhysicalConstant", "ClassicalElectronRadius"],
Entity["PhysicalConstant", "ElectronMass"],
Entity["PhysicalConstant", "BohrRadius"],
Entity["PhysicalConstant", "BohrMagneton"],
Entity["PhysicalConstant", "MolarPlanckConstant"],
Entity["PhysicalConstant", "ElectronComptonFrequency"],
Entity["PhysicalConstant", "SchottkyNordheimConstant"]};
Combining the rows into a table yields the following result for optimal representations of these constants within the new SI.
✕
TextGrid[Prepend[newSIRow[minimalEquation[#]] & /@ constantList,
   Style[#, Gray, Italic] & /@ {"constant", "symbol", "value",
     "current\nuncertainty",
     "new SI\nuncertainty", "new SI\nuncertainty\ncause"}] /.
  "exact" :> Style["exact", Gray], Dividers -> All,
 Background -> {None, {{LightGray, Sequence @@ Table[None, 15]}}}]
This was the table we set out to derive, and we succeeded in deriving it. Note how in some entries the same constant appears in both the numerator and the denominator in such a way that it cancels in the product. A similar list can be found at the bottom of the Wikipedia page about the redefinition of the SI units.
Now we can look forward to World Metrology Day 2019 for a fundamentally better world through fundamental constants.
Download this post as a Wolfram Notebook.
7. Peppa Pig, Tracking Meteorite Trajectory and Computational Linguistics: Wolfram Community Highlights (Thu., March 21)
Over the past 16 weeks, Wolfram Community has gained over 1,000 new members—surpassing 21,000 members total! We’ve also seen more activity, with 800,000 pageviews and 160,000 new readers in that time period. We enjoy seeing the interesting and unique projects Wolfram Language users come up with and are excited to share some of the posts that make Wolfram Community a favorite platform for sharing, socializing and networking.
Jofre Espigule
To mark UNESCO International Mother Language Day on February 21, Wolfram’s Jofre Espigule celebrated by exploring different mother tongues using the Wolfram Language’s LanguageData function. Jofre illustrates how well-protected languages have a bigger internet presence, displayed through Wikipedia articles and reflected in the state of different languages in countries of varied economic development.
Erik Mahieu
Mirror anamorphism is, effectively, “what you get is not what you see.” To demonstrate anamorphic movies, Erik Mahieu uses Bugs Bunny, the Wolfram Language and a curved cylindrical mirror to show how distorted images can be projected to show beloved childhood cartoon characters (or whatever your imagination comes up with!).
Jeff Bryant
On February 1, 2019, a meteor exploded over western Cuba in the area of Viñales, where meteorites were found. Wolfram’s Jeff Bryant analyzes data gathered by the American Meteor Society to estimate the meteorite’s trajectory and impact point, as well as the area of observability (defined by the altitude at which the meteor was estimated to have exploded), among other things.
Shenghui Yang
Can Peppa Pig from the famed British children’s TV series teach us a few things about topology? Enchanted by math and aided by Wolfram’s Shenghui Yang, Peppa Pig transports from her usual peaceful (and 2D) village in Euclidean space into a magical place of smooth, curved manifolds. Adventurous Peppa goes through surfaces of cylinders and Möbius strips, and with the help of the Wolfram Language, she masters the powerful magic of manifold orientation.
Anmol Singh
When Anmol Singh, a high-school student from New Delhi, volunteered for community service to teach basic English vocabulary to underprivileged kids, he quickly realized that standard teaching methods couldn’t hold their attention for very long. He used the Wolfram Language to create educational games, such as a variation of Hangman and a new game he calls “The Three-Word Thriller.” Using Wolfram Language-based games accelerated the kids’ learning process and exposed them to a level of technology they had never seen before.
Silvia Hao
There isn’t much that’s more relaxing than watching stars reflecting off a body of water, and Silvia Hao, CEO of Glimscape Technology, shows how you can conveniently do it on your computer screen.
If you haven’t yet signed up to be a member of Wolfram Community, please do so! You can join in on similar discussions, post your own work in groups of your interest and browse the complete list of Staff Picks.
8. 3D Printing “Spikey” Commemorative Coins with the Wolfram Language (Thu., March 14)
I approached my friend Frederick Wu and suggested that we should make a physical Wolfram Spikey Coin (not to be confused with a Wolfram Blockchain Token!) for the celebration of the 30th anniversary of Mathematica. Frederick is a long-term Mathematica user and coin collector, and together, we challenged ourselves to design our own commemorative coin for such a special event.
The iconic Spikey is a lifelong companion of Mathematica, coined (no pun intended) in 1988 with the release of Version 1. Now, we’ve reached a time in which Wolfram technologies and different 3D printing processes happily marry together to make this project possible!
Getting Started
Traditional coin-casting uses a low-relief design with an orthographic projection, giving viewers the impression of a distinct 3D image with minimal model depth. Usually, the relief depth plane can be set between the front plane of the object and the vanishing plane. A low relief compresses the model in the axial direction (perpendicular to the background plane) with a scale ratio ranging from 0.02 to 0.1, a high relief from 0.1 to 0.2 and a super-high relief greater than 0.3.
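In code, that relief compression is just an anisotropic scaling along the view axis. A minimal Python sketch (function name hypothetical, not from the post):

```python
def compress_relief(points, ratio, background_z=0.0):
    """Scale z coordinates toward the background plane z = background_z.
    A ratio of ~0.02-0.1 gives a low relief, 0.1-0.2 a high relief."""
    return [(x, y, background_z + ratio * (z - background_z))
            for (x, y, z) in points]

# Flatten two sample points with a low-relief ratio of 0.02:
flattened = compress_relief([(0, 0, 1.0), (1, 2, -0.5)], 0.02)
```

The x and y coordinates are untouched, so the orthographic outline of the model is preserved exactly; only the depth is squeezed.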
We crafted a Wolfram Demonstrations Project applet (Design Your Own Commemorative Coin) to illustrate some cool coin designs using the aforementioned orthographic projection and 3D geometric scaling method. The user can freely set the viewpoint, the level of the relief plane and the scaling ratio.
The following image illustrates the visual effect of relief design; the gray impression on the left appears as though it were created by the Spikey punching through the coin (similar to aligning a palm with the impression after a face slap):
We can quantify the scaling effect graphically by comparing the outline of the original Spikey shape to that of the projection. Using SetOptions, we can give both shapes the same ViewPoint, PlotRange and ImageSize, additionally setting BaseStyle and Lighting so that only the 2D outline is displayed:
✕
SetOptions[{Region},Boxed->False,ViewPoint->Top,BaseStyle->Gray,
Lighting->{"Directional",White},PlotRange->{{-2,2},{-2,2},{-2,2}},ImageSize->400];

Running the following code generates three graphics—a 2D projection of the “real” 3D Spikey object, a relief model with the same view point but “squeezed” along the central view point vector and the image pixel difference between the two shapes:
✕
threeDim=PolyhedronData["Spikey","BoundaryMeshRegion"];
reliefDim=TransformedRegion[threeDim,ScalingTransform[.02,{0,0,1},{0,0,0}]];
Grid[{
{"real 3D object","relief model","image pixel difference"},
{#,diff=ColorNegate[ImageDifference@@#]}&[Rasterize[Region[#]]&/@{threeDim,reliefDim}]//Flatten}]

The accuracy of the model can also be analyzed numerically by measuring the pixel error in the boundary. For instance, a model compressed at scaling ratio 0.02 viewed from an angle less than 10° from the center vector produces only a 3.3% pixel error:
In other words, the relief model used 2% of the depth of the 3D object to create a 96.7% 3D effect.
Modeling the Coin
We have come a long way, but the job is not finished yet. There is some clearance between the Spikey and the coin body, so I need to fill the gap in geometry. First, I get the Spikey region model, rotating it a bit to create a nonsymmetric pattern (for artistic reasons):
✕
SetOptions[{Region},ViewPoint->Top,BaseStyle->{Pink,EdgeForm[None]},PlotRange->All,AspectRatio->1,Lighting->"Neutral",ImageSize->250];
SpikeyRegion=PolyhedronData["Spikey","BoundaryMeshRegion"];
SpikeyRegion3D=TransformedRegion[SpikeyRegion,Composition[
RotationTransform[E,{E,Pi,E}],ScalingTransform[12 {1,1,1},{0,0,0}]]];

✕
Grid[{
{"Spikey Top View","Spikey Bottom View"},
Table[Region[SpikeyRegion3D,ViewPoint->v],{v,{Top,Bottom}}]},
BaseStyle->"Text"]

Using ConvexHullMesh, we can generate prism-like polyhedrons by stretching each triangular face along the z direction:
✕
convexhullMesh=ConvexHullMesh[Join[#, Transpose[Transpose[#]+{0,0,50}]]]&/@MeshPrimitives[SpikeyRegion3D,2][[All,1]];
Multicolumn[convexhullMesh[[;;16]],8, ItemSize->{3,6}]

Now, use RegionUnion repeatedly to join the generated prism-like polyhedrons together. Since not all regions touch, we also need BoundaryDiscretizeRegion to help fill in the gaps. The result is a region with the front and back geometry of a Spikey, but stretched (extruded) in the z direction:
✕
regionUnion1 = Table[BoundaryDiscretizeRegion@RegionUnion@
Take[convexhullMesh, {3 (i - 1) + 1, 3 i}], {i, 20}];
regionUnion2 =
Table[RegionUnion @@ Take[regionUnion1, {5 (i - 1) + 1, 5 i}], {i,
4}];
convexhullUnion = RegionUnion @@ regionUnion2

Next we prepare a coin body with an outside protective ring. This is done by first multiplying an Annulus by a Line to get a tubelike shape:
✕
{r1,r2}={21,23};
annulus=BoundaryDiscretizeGraphics[Graphics[Annulus[{0,0},{r1,r2}]],MaxCellMeasure->.1];
tube=BoundaryDiscretizeRegion[RegionProduct[annulus,Line[25+{{-41.5},{41.5}}]], MaxCellMeasure->Infinity]

Then we can fill the interior of the coin with a solid disk:
✕
assembly = RegionUnion[BoundaryDiscretizeRegion[RegionProduct[
BoundaryDiscretizeGraphics@Graphics[Disk[{0, 0}, 22.5]],
Line[25 + {{-7.5}, {7.5}}]], MaxCellMeasure -> Infinity], tube]

Finally, we compress the 3D extruded Spikey into a relief model:
✕
SpikeyRelief =
TransformedRegion[convexhullUnion,
ScalingTransform[.02, {0, 0, 1}, {0, 0, 0}]]

Similarly, we compress the 3D pulled coin into a coin model. Usually, the coin ring is slightly thicker than the relief height so the outside ring can protect the relief patterns and resist abrasion. We set a slightly larger scaling factor of .03 to account for this:
✕
Coin=TransformedRegion[assembly,ScalingTransform[0.03,{0,0,1},{0,0,0}]]

Finally, we combine the regions to make the Spikey coin model:
✕
SpikeyCoinRegion=Show[{SpikeyRelief,Coin}]

Viewing in 3D
Let’s take a glance at the whole model. An important concept in coin design is “breakthrough” or “penetration,” i.e. the illusion that the Spikey breaks or travels through a coin plate in space and time. We can visualize this using the uncompressed convex hull data:
✕
convexhullData=MeshPrimitives[#,2]&/@convexhullMesh;

✕
Graphics3D[{convexhullData,Opacity[.5],Red,Cylinder[{{0,0,25-8},{0,0,25+8}},r1],Blue,Opacity[.2],EdgeForm[None],MeshPrimitives[tube,2]},Axes->True,ImageSize->{600,400}]

Using a scale ratio of 0.025, we can then compress the 3D components and combine them into a styled Graphics3D object:
✕
scale=ScalingTransform[.025,{0,0,1}];
subject=GeometricTransformation[{convexhullData},scale];
body=GeometricTransformation[{Cylinder[{{0,0,17.5},{0,0,32.5}},r1]},scale];
ring=GeometricTransformation[{MeshPrimitives[tube,2]},scale];
coin3D=Graphics3D[
{EdgeForm[None],ColorData["Atoms"]["Au"],subject,White,body,ring},Lighting->Red,Boxed->False]

Viewing the coin from a few different angles shows the “breakthrough” effect in context. The two sides of the coin pattern look similar, but they are actually the top view {0, 0, ∞} and the bottom view {0, 0, -∞} of the same Spikey:
✕
vp={{0,10,4},{10,0,4},{0,0,\[Infinity]},{0,0,-\[Infinity]},{1,.1,2},{-1,-.1,-2}};

✕
Multicolumn[Table[Graphics3D[{EdgeForm[None],Specularity[Brown,100],ColorData["Atoms"]["Cu"],subject,LightBlue,Specularity[Red,100],body,Opacity[If[i==1||i==2,.01,0.9]],ring},
Axes->True,
AxesLabel->{"x","y","z"},ViewPoint->vp[[i]],ImageSize->220],{i,Length@vp}],2,Appearance->"Horizontal",Spacings->5,Alignment->Center]

3D Printing Methods and Materials
To 3D print the coin, we need to start with a highquality model. Although Graphics3D is convenient for visualizing the result, it doesn’t translate well to STL. When exporting with Printout3D, you can click to expand the “Report” element for information about the final model quality:
✕
Printout3D[coin3D, "Coin.stl", RegionSize->Quantity[40, "Millimeters"]]

Errors in discretization can lead to a final print that looks distorted, with faces glued together:
Using Region (as outlined above) produces a more strictly defined object for higherquality STL export:
✕
Printout3D[SpikeyCoinRegion, "Coin.stl", RegionSize->Quantity[46, "Millimeters"]]

For a thin model, horizontal placement (i.e. with the background plane flat on the printing table) results in a poor print resolution. Vertical or tilted placement helps to increase printable layers and improve detail resolution in the relief:
FDM (Fused Deposition Modeling), the most widely used 3D printing technology, works by applying successive layers of thermoplastics. This method is low cost, but it also has a relatively low accuracy:
SLA (stereolithography) is a 3D printing technique that uses ultraviolet light to cure photosensitive polymers, resulting in smoother, more accurate prints than FDM:
PBF (powder bed fusion) involves applying heat to fuse together successive layers of powdered material. This process is more accurate still—though also quite expensive—and it allows the use of metallic materials. I created a stainless steel powder print with the German EOS M 290, a milliondollar piece of equipment with advanced additive manufacturing technology:
The printed coin has a 40 mm outside diameter and a 3 mm thickness (with the thinnest region of the coin plate being only 0.5 mm), weighing about 15 grams:
As you can see from the images, the relief pattern is clearly distinguishable, with all faces achieving diffuse reflection:
Try It Yourself
Although not everyone owns a 3D printer, many schools, libraries and makerspaces now offer 3D printing services at some level. The descriptions provided in this post should give you a good idea of what processes and materials will work for your print.
If you don’t have access to a printer locally, you can still produce your own coin using one of the online printing services available through Printout3D:
✕
Printout3D[SpikeyCoinRegion, "Sculpteo",
RegionSize -> Quantity[46, "Millimeters"]]

These services allow you to rescale your model and select from a range of printing processes and materials:
With the Wolfram Language, anyone can go from 3D model to shiny coin in no time. Give it a try—soon you could have your very own Spartan army of Spikeys:
Download this post as a Wolfram Notebook.
This post originated on Wolfram Community. Have interesting 3D printing projects made using the Wolfram Language? Share them here and look at similar projects!
9. Shattering the Plane with Twelve New Substitution Tilings Using 2, φ, ψ, χ, ρ (Thu., March 7)
Similar Triangle Dissections
Version 12 of the Wolfram Language introduces solvers for geometry problems. The documentation for the new function GeometricScene has a neat example showing the following piece of code, with GeometricAssertion calling for seven similar triangles:
✕
o=Sequence[Opacity[.9],EdgeForm[Black]];plasticDissection=RandomInstance[GeometricScene[{a,b,c,d,e,f,g},{
a=={1,0},e=={0,0},Line[{a,e,d,c}],
p0==Polygon[{a,b,c}],
p1==Style[Polygon[{b,d,c}],Orange,o],
p2==Style[Polygon[{d,f,e}],Yellow,o],
p3==Style[Polygon[{b,f,d}],Blue,o],
p4==Style[Polygon[{g,f,b}],Green,o],
p5==Style[Polygon[{e,g,f}],Magenta,o],
p6==Style[Polygon[{a,e,g}],Purple,o],
GeometricAssertion[{p0,p1,p2,p3,p4,p5,p6},"Similar"]}],RandomSeeding->28]

The coordinates of the point c involve the plastic constant ρ, the real root of x^3 = x + 1.
✕
{Chop[c/.plasticDissection["Points"]],N[Sqrt[{8,12,8}.(\[Rho]^{0,1,2})]]}

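For readers outside the Wolfram Language, the plastic constant is easy to compute directly; here is a short Python sketch (not from the post) using Newton's method:

```python
def plastic_constant(tol=1e-15):
    """Real root of x^3 = x + 1 (the plastic constant), via Newton's method."""
    x = 1.3  # starting guess near the root
    while True:
        step = (x**3 - x - 1) / (3 * x**2 - 1)
        x -= step
        if abs(step) < tol:
            return x

rho = plastic_constant()
print(rho)  # about 1.3247179572
```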
Combinations of √ρ build the entire triangle, putting it in the algebraic number field Q(√ρ). Call this the √ρ dissection. The length of an edge labeled n is (√ρ)^n.
✕
dissectionDiagram[SqrtRho,400]

In the initialization section of the notebook, SqrtRho is defined to be the list consisting of the root √ρ, the vertices in terms of that root, the subtriangles and the symbol. The function dissectionDiagram uses these values to draw the triangles, with the edge lengths equal to powers of √ρ.
The Cartesian coordinates can be found with SqrtSpace, defined in the initialization section.
✕
N[SqrtSpace[SqrtRho[[1]],SqrtRho[[2]]]]

Pisot Numbers
The plastic constant is the smallest Pisot number: an algebraic integer greater than 1, all of whose conjugate elements lie inside the unit disk. Here are the first four and the ninth Pisot numbers, showing the value as a point outside the unit circle and the conjugate elements inside.
✕
Text[Grid[Transpose[{Style[TraditionalForm[#],10],Max[N[Norm/@(x/.Solve[#==0])]],
Graphics[{Point[ReIm/@(x/.Solve[#==0])], Circle[{0,0},1],Red,
Disk[{0,0},.05]}, ImageSize-> 95]}&/@{-1-x+x^3,-1-x^3+x^4,-1+x^2-x^3-x^4+x^5,-1-x^2+x^3,-1-x^2-x^4+x^5}],Frame-> All]]

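The Pisot property of the plastic constant can be verified without solving for the complex roots at all: for x³ - x - 1, Vieta's formulas give a root product of 1, so the complex-conjugate pair has modulus 1/√ρ < 1. A Python sketch of this check (not from the post):

```python
import math

def plastic_conjugate_modulus():
    # Real root of x^3 - x - 1 (the plastic constant) via Newton's method
    x = 1.3
    for _ in range(50):
        x -= (x**3 - x - 1) / (3 * x**2 - 1)
    # Vieta: the product of all three roots equals 1,
    # so the complex-conjugate pair has modulus sqrt(1/x).
    return math.sqrt(1 / x)

m = plastic_conjugate_modulus()
print(m)  # about 0.8688, inside the unit disk
```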
The second Pisot number, the real root of x^4 = x^3 + 1, also has a real conjugate element; its negative reciprocal is χ (chi), the real root of x^4 = x + 1, with χ ≈ 1.2207.
✕
Grid[{N[#,15],TraditionalForm[MinimalPolynomial[#,x]]}&/@{Root[-1-#1^3+#1^4 &,2], -1/Root[-1-#1^3+#1^4 &,1]}, Frame-> All]

Here is the second neat example from the documentation mentioned in the opening paragraph. Decompose a polygon into similar triangles:
RandomInstance[GeometricScene[{a, b, c, d, e, o},
{Polygon[{a, b, c, d, e}],
p1 == Style[Triangle[{a, b, o}], Red],
p2 == Style[Triangle[{b, o, c}], Blue],
p3 == Style[Triangle[{c, d, o}], Yellow],
p4 == Style[Triangle[{d, o, e}], Purple],
p5 == Style[Triangle[{e, o, a}], Orange],
GeometricAssertion[{p1, p2, p3, p4, p5}, "Similar"] } ], RandomSeeding -> 6]

That solution can be extended to nine similar triangles.
o=Sequence[Opacity[.9],EdgeForm[Black]];RandomInstance[GeometricScene[{a,b,c,d,e,f,g,h,i},{
h=={0,0},d=={1,0},p0==Polygon[{d,a,i}],
p1==Style[Polygon[{a,b,f}],Magenta,o ],
p2==Style[Polygon[{b,f,g}],Yellow,o],
p3==Style[Polygon[{f,g,e}],Purple,o],
p4==Style[Polygon[{e,g,h}],Blue,o],
p5==Style[Polygon[{h,c,g}],Cyan,o],
p6==Style[Polygon[{d,h,c}],Red,o],
p7==Style[Polygon[{c,g,b}],Orange,o],
p8==Style[Polygon[{a,e,i}],Green,o],
GeometricAssertion[{p0, p1,p2,p3,p4,p5,p6,p7,p8},"Similar"],
Line[{a,b,c,d}],Line[{a,f,e}],Line[{i,e,h,d}]}],RandomSeeding->85]

These triangles are built with powers of √χ; call this the √χ dissection. The length of an edge is (√χ)^k, where k is the label on the edge.
dissectionDiagram[SqrtChi, 480]

The Golden and Supergolden Ratios
Related to ρ and χ is the golden ratio ϕ, introduced in the book Liber Abaci (1202) by Leonardo Bonacci of Pisa. Early in the book, he introduces the Hindu-Arabic number system.
Later in Liber Abaci is the rabbit problem, leading to what is now called the Fibonacci sequence. The name “Fibonacci” was created in 1838 from “filius Bonacci” or “son of Bonacci.”
This shows the Fibonacci rabbit sequence and its relation to ϕ (phi), the golden ratio.
ϕ=GoldenRatio;
rabbitseq=LinearRecurrence[{1,1},{1,1},120];
Grid[{
{"rabbit sequence",Row[Append[Take[rabbitseq,16],"…"],","]},
{"successive ratios",N[rabbitseq[[-1]]/rabbitseq[[-2]],50]},
{"golden ratio",N[ϕ,50]}
},Frame->All]
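The convergence of successive ratios is easy to replicate outside the notebook; a minimal Python sketch (the term count of 120 matches the code above):

```python
# Rabbit (Fibonacci) recurrence a(n) = a(n-1) + a(n-2); the ratio of
# successive terms converges to the golden ratio (1 + sqrt(5))/2.
from math import sqrt

rabbit = [1, 1]
while len(rabbit) < 120:
    rabbit.append(rabbit[-1] + rabbit[-2])

golden = (1 + sqrt(5)) / 2
print(abs(rabbit[-1] / rabbit[-2] - golden) < 1e-12)   # True
```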

In 1356, Narayana posed the following question in his book Ganita Kaumudi: “A cow gives birth to a calf every year. In turn, the calf gives birth to another calf when it is three years old. What is the number of progeny produced during twenty years by one cow?”
We can use Mathematica to show the Narayana cow sequence and its relation to ψ (psi), the supergolden ratio.
ψ=Root[-1-#1^2+#1^3&,1];
cowseq=LinearRecurrence[{1,0,1},{1,2,3},180];
Grid[{
{"cow sequence",Row[Append[Take[cowseq,16],"..."],","]},
{"successive ratios",N[cowseq[[-1]]/cowseq[[-2]],50]},
{"supergolden ratio",N[ψ,50]}
},Frame->All]
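The cow sequence can be checked the same way; a short Python sketch (180 terms, as in the code above):

```python
# Narayana cow recurrence a(n) = a(n-1) + a(n-3); successive ratios
# converge to the supergolden ratio, the real root of x^3 = x^2 + 1.
cows = [1, 2, 3]
while len(cows) < 180:
    cows.append(cows[-1] + cows[-3])

psi = cows[-1] / cows[-2]
print(abs(psi**3 - (psi**2 + 1)) < 1e-9)   # True: psi^3 = psi^2 + 1
```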

The ratios of consecutive terms in the Padovan sequence and Perrin sequence both tend to ρ, as shown in Fibonacci and Padovan Spiral Identities and Padovan’s Spiral Numbers. This shows these two sequences with the rabbit and cow sequences:
padovan=LinearRecurrence[{0,1,1},{2,2,3},15];perrin=LinearRecurrence[{0,1,1},{2,3,2},15];
narayana=LinearRecurrence[{1,0,1},{1,2,3},15];
fibonacci=LinearRecurrence[{1,1},{1,1},15];
Column[{TextGrid[Transpose[{{"Padovan","Perrin","Narayana","Fibonacci"},
Row[Append[#,"…"],","]&/@{padovan,perrin,narayana,fibonacci}}],Frame->All],
ListLinePlot[Rest[#]/Most[#]&/@{padovan,perrin,narayana,fibonacci},
PlotRange->{1,2},GridLines->{{},{ρ,ψ,ϕ}},ImageSize->Medium]}]
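Both third-order sequences can be checked against ρ the same way; a small Python sketch (initial values copied from the code above, with an arbitrary 300-term run):

```python
# Padovan and Perrin share the recurrence a(n) = a(n-2) + a(n-3);
# the ratio of successive terms tends to the plastic constant rho.
def ratio_limit(init, terms=300):
    seq = list(init)
    while len(seq) < terms:
        seq.append(seq[-2] + seq[-3])
    return seq[-1] / seq[-2]

padovan = ratio_limit([2, 2, 3])
perrin = ratio_limit([2, 3, 2])
print(abs(padovan**3 - (padovan + 1)) < 1e-9)   # True: rho^3 = rho + 1
print(abs(padovan - perrin) < 1e-9)             # True: same limit
```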

The powers ϕ^0, ϕ^(1/2) and ϕ^1 of the golden ratio are the side lengths of the Kepler right triangle. The golden ratio (or Fibonacci rabbit constant) is a Pisot number. By using the Pisot numbers ρ (the plastic constant), χ or ψ (the supergolden ratio), the same process makes a 120° angle, as does the tribonacci constant.
Grid[Partition[niceTriangle/@nice[[{2,11,13,8,14,12}]],2]]
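The Kepler triangle claim reduces to the defining identity ϕ^2 = ϕ + 1; a one-line Python check:

```python
# Kepler right triangle: sides 1, sqrt(phi), phi form a right triangle
# because 1 + phi = phi^2, the defining equation of the golden ratio.
from math import sqrt

phi = (1 + sqrt(5)) / 2
print(abs(1**2 + sqrt(phi)**2 - phi**2) < 1e-12)   # True: Pythagorean relation
```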

Almost all the Platonic and Archimedean solids can be built with the octahedral group acting on Q(√2) or the icosahedral group acting on Q(√5). Exceptions:
 The snub cube needs a root of x^3 - x^2 - x - 1 (the tribonacci constant).
 The snub dodecahedron needs a root of x^3 - x^2 - x - ϕ.
 The snub icosidodecadodecahedron needs an element of another number field (not shown).
This builds the first two snubs with vertex coordinates in the given algebraic number fields.
scroot=Root[-1-#1-#1^2+#1^3&,1];
{scv,scf}=Normal[]; scp=SqrtSpace[scroot, scv];
sdroot=Root[-GoldenRatio-#1-#1^2+#1^3&,1];
{sdv,sdf}=Normal[];sdp=SqrtSpace[sdroot, sdv];
GraphicsRow[{
Graphics3D[{EdgeForm[Thick],Opacity[.9],Polygon[scp[[#]]]&/@scf}, Boxed->False, ViewAngle->Pi/9],Graphics3D[{EdgeForm[Thick],Opacity[.9],Polygon[sdp[[#]]]&/@sdf}, Boxed->False,ViewAngle->Pi/10]},ImageSize->530]

If two roots have the same number field discriminant (computed with NumberFieldDiscriminant), they usually belong to the same algebraic number field. Here are two polynomials for the tribonacci constant.
Grid[{TraditionalForm[MinimalPolynomial[#,x]],NumberFieldDiscriminant[#]}&/@{Root[-1-#1-#1^2+#1^3&,1],Root[-2+2#1-2#1^2+#1^3&,1]},Frame->All]
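The shared discriminant can also be confirmed by hand with the classical cubic discriminant formula; a Python sketch (these are polynomial discriminants, which for these two cubics agree with the number field discriminant):

```python
# Discriminant of a cubic a x^3 + b x^2 + c x + d:
#   18abcd - 4 b^3 d + b^2 c^2 - 4 a c^3 - 27 a^2 d^2
def cubic_discriminant(a, b, c, d):
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

print(cubic_discriminant(1, -1, -1, -1))   # x^3 - x^2 - x - 1   ->  -44
print(cubic_discriminant(1, -2, 2, -2))    # x^3 - 2x^2 + 2x - 2 ->  -44
```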

The tribonacci constant is part of an odd series of polynomials tying together Heegner numbers and the j-function in a way that leads to extreme almost integers in multiple ways.
The silver ratio also leads to interesting geometry. If a sheet of A4 paper is folded in half, the resulting rectangles are similar to the original rectangle. The A4 rectangle can be perfectly subdivided into smaller distinct A4 rectangles in many strange ways. The values √2, ψ, ϕ and ρ are all involved in dissections of squares and similar rectangles.
Grid[{{A4rect,psirect},{goldenrect, plasrect}}]

These dissections can be found in Version 12.
o=Sequence[Opacity[1],EdgeForm[Black]];RandomInstance[GeometricScene[{a,b,c,d,e,f,g, h},{
a=={0,0},d=={0,1},
p01==Polygon[{g,h,a}],p02==Polygon[{a,e,g}],
p11==Style[Polygon[{c,b,a}],Orange,o],
p12==Style[Polygon[{a,d,c}],Orange,o],
p21==Style[Polygon[{c,d,e}],Yellow,o],
p22==Style[Polygon[{e,f,c}],Yellow,o],
p31==Style[Polygon[{h,b,f}],LightBlue,o],
p32==Style[Polygon[{f,g,h}],Blue,o],
GeometricAssertion[{p01,p02,p11,p12,p21,p22,p31,p32},"Similar"]}],RandomSeeding->7]

The third root leads to solutions for the disk-covering problem and the Heilbronn triangle problem.
Row[{Melissen12, Heilbronn12}]

Infinite Series
Many of the numbers introduced so far can be expressed as infinite series of negative powers of themselves.
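These “self-summing” identities are easy to verify numerically; a Python sketch (the series cutoffs at n = 200 are an arbitrary truncation; the tails are negligible):

```python
# Verify psi = sum_{n>=2} psi^(-n) and rho = sum_{n>=4} rho^(-n),
# with each constant obtained by bisecting its defining cubic on [1, 2].
def bisect_root(f, lo=1.0, hi=2.0, tol=1e-14):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return (lo + hi) / 2

psi = bisect_root(lambda x: x**3 - x**2 - 1)   # supergolden ratio
rho = bisect_root(lambda x: x**3 - x - 1)      # plastic constant

print(abs(sum(psi**-n for n in range(2, 200)) - psi) < 1e-9)   # True
print(abs(sum(rho**-n for n in range(4, 200)) - rho) < 1e-9)   # True
```

Both identities are exact: the geometric sums collapse to 1/(ψ(ψ−1)) and 1/(ρ^3(ρ−1)), which equal ψ and ρ by their defining cubics.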
A 45° right triangle of area 2 can be used to prove the first series by splitting off smaller and smaller similar triangles. Or use the infinite similar triangle dissection shown here.
start=SqrtSpace[2,{{{18},{0}},{{18},{0}},{{9},{63}}/8,{{18},{0}},{{81},{63}}/32}];
Graphics[{Line[Join[start,Flatten[Table[{(2^a-1)/2^a start[[1]]+(1)/2^a start[[2]],
(2^a-1)/2^a start[[1]]+(1)/2^a start[[5]]},{a,1,7}],1]]] }]

The infinite series for ϕ can also be illustrated with an infinite set of similar triangles.
forphi=Normal[List];
Graphics[{EdgeForm[Black],{Lighter[Hue[1-0.8Area[Polygon[#]]]], Polygon[#],Black,
With[{log=Round[Log[GoldenRatio,Area[Polygon[#]]]]},If[log>-10,Text[log,Mean[#]]]]}&/@forphi}]

The infinite series for ψ can be illustrated with an infinite set of similar Rauzy fractals.
r=Root[-1-#1^2+#1^3&,3];Cow[comp_]:=Map[First,Split[
Flatten[RootReduce[Map[Function[x,x[[1]]+(x[[2]]-x[[1]]){0,r^5,r^5+1,1}],Partition[comp,2,1,1]]]]]];
poly2=Table[ReIm[Nest[Cow,N[RootReduce[r^({4,1,3,5}+n)-{1,1,1,1}],50],3]],{n,1,30}];fracψ=Graphics[{EdgeForm[{Black}],Gray,Disk[{0,0},.1],
Map[Function[x,{ColorData["BrightBands"][N[Norm[Mean[x]]-3]],Polygon[x]}],poly2],Black, Inset[Style[Row[{"ψ =",Underoverscript["∑","n=2","∞"],Superscript["ψ","-n"]}],20],{1/3,1/3}]}]

The infinite series for ρ can be illustrated with an infinite set of similar fractals.
ρ=Root[-1-#1+#1^3&,3]; iterations=4;big=ρ^{5,8,6,9,4,7};par=ρ^{6,9,7,10,5,8};
wee=ρ^4{ρ^3,ρ^8,ρ^5,ρ^6(2ρ^2-1),1,ρ^2(2ρ^3-1)};
plastic[comp_]:=Map[First,Split[Flatten[RootReduce[Map[Function[x,
{x[[1,1]]+(x[[1,2]]-x[[1,1]]){0,ρ^5,1},x[[2,1]]+(x[[2,2]]-x[[2,1]]){0,1-ρ^5,1}}],Partition[Partition[comp,2,1,1],2]]]]]];
poly=Table[{Hue[Pi n],Polygon[ReIm[Nest[plastic,wee ρ^(n-1),iterations]]]},{n,1,20}];Graphics[{EdgeForm[Black],poly, Inset[Style[Row[{"ρ =",Underoverscript["∑","n=4","∞"],Superscript["ρ","-n"]}],20],{5/12,1/3}] }, ImageSize->500]

Here is that grid of values again.
Iterated Dissections
It turns out these “self-summing” infinite series also have unusual self-similar triangle dissections, first hinted at in Wheels of Powered Triangles.
Column[dissectionDiagram[#, 530]&/@{SqrtTwo,SqrtPhi,SqrtPsi,SqrtChi,SqrtRho}]

Iterate the dissection; to reduce the chaos, triangles with the same orientation are colored the same. Here are the dissections after 18 steps.
Column[labeledrecursionDiagram[#, 530, 18]&/@{SqrtTwo,SqrtPhi,SqrtPsi,SqrtChi,SqrtRho}]

The following pinwheel tiling is not nearly as chaotic. Pinwheel triangles eventually have an infinite number of orientations, but the progression to chaos is slower than in the dissections shown previously.
recursionDiagram[pinwheel, 530, 22]

Here’s a portion of the fractal after 180 steps.
pts=Drop[Append[SqrtSpace[SqrtChi[[1]], SqrtChi[[2]]], SqrtSpace[SqrtChi[[1]], {{7,13,11,9},{9,15,13,11}}/4]],{3}];
tri={{2,9,1},{4,6,3},{5,6,8},{7,6,4},{3,5,6},{8,7,6},{1,3,5},{2,8,7}};
edges=Union[Flatten[Subsets[#,{2}]&/@tri,1]];
bary=ReptileSubstitutionBarycentrics[SqrtChi];
start=GatherBy[Chop[N[pts[[#]]]]&/@tri,Area[Polygon[#]]&];
iterate=SortBy[GatherBy[RecursiveBarycentrics[180,bary, start],Round[10^6Normalize[#[[1]]-Mean[#]]]&],Length[#]&];fractchi = Graphics[{EdgeForm[Black], MapIndexed[
{RGBColor[N[(IntegerDigits[#2[[1]],4,3])/3.1]], Polygon[#]&/@#1}&,iterate]}, ImageSize->530]

And here is a portion of the plastic fractal after 40 steps.
take=SqrtRho;
pts=ReplacePart[SqrtSpace[take[[1]], take[[2]]],3->SqrtSpace[take[[1]], {{4,1,3},{4,7,7}}/4]];
tri=ReplacePart[take[[3]],1->{2,3,1}];
bary=ReptileSubstitutionBarycentrics[take];
start=GatherBy[Chop[N[pts[[#]]]]&/@tri,Area[Polygon[#]]&];
iterate=SortBy[GatherBy[RecursiveBarycentrics[40,bary, start],Round[10^6Normalize[#[[1]]-Mean[#]]]&],Length[#]&];Graphics[{EdgeForm[Black], MapIndexed[
{RGBColor[N[(IntegerDigits[#2[[1]],4,3])/3.1]], Polygon[#]&/@#1}&,iterate]}, ImageSize->530]

By using symmetry within the dissections, it turns out there are twelve substitution tilings with distinct properties.
Grid[Partition[dissectionDiagram[#, 265]&/@
{SqrtTwo,SqrtTwo2,SqrtPhi,SqrtPsi,SqrtChi,SqrtChi2,SqrtChi3,SqrtChi4,SqrtRho,SqrtRho2,SqrtRho3,SqrtRho4},UpTo[2]]]

It turns out “neat example” really is true here, leading to twelve new substitution tilings.
Download this post as a Wolfram Notebook.
10. Computing in 128 Characters: Winners of the 2018 Wolfram Employees One-Liner Competition (Tue., Feb. 26)
Every year at the Wolfram Technology Conference, attendees take part in the One-Liner Competition, a contest to see who can do the most astounding things with 128 characters of Wolfram Language code. Wolfram employees are not allowed to compete out of fairness to our conference visitors, but nevertheless every year I get submissions and requests to submit from my colleagues that I have to reject. To provide an outlet for their eagerness to show how cool the software is that they develop, this year we organized the first internal One-Liner Competition.
We awarded first-, second- and third-place prizes as well as six honorable mentions and one dishonorable mention. And the winners are…
Honorable Mention
Danny Finn, Consultant
ImageGuessr (Wolfram Pictionary) (128 characters)
Danny’s submission is a complete game in 128 characters. Some of the judges found it so compelling that they went on playing after the judging session ended.
The code picks a random word and assembles a collage of images found on the web by searching for that word. Then it puts up a dialog with the collage and an input field for the player to guess what the word is. When the player enters a word, it correlates the semantic features of the guess with the semantic features of the word. The higher the correlation, the closer the guess in meaning to the original word. That’s a lot of functionality in one tweet of code!
{w=RandomWord[],g=ToString@Input@ImageCollage@WebImageSearch[w,"Images"],Dot@@@FeatureExtract[{{w,g}},"WordVectors"][[;;,;;,1]]}
Honorable Mention
Danny Finn, Consultant
Notebook Pox (123 characters)
Danny earned a second honorable mention for code that gives your notebook a case of the pox. He probably would have earned a dishonorable mention had he not also provided the cure (see the second input in this section).
Danny could have saved seven characters by eliminating the unnecessary System` that precedes BackgroundAppearance, probably a leftover from some sort of experimentation.
SetOptions[EvaluationNotebook[],System`BackgroundAppearance->Rasterize@Graphics[{Red,Disk[#,0.01]&/@RandomReal[1,{99,2}]}]]
SetOptions[EvaluationNotebook[],System`BackgroundAppearance->None]
Honorable Mention
Sarah Stanley, Principal Consultant
Rainforest Winter (126 characters)
Sarah’s submission combines image search and an image-transforming neural network in a novel way to show what the rainforest would look like if it snowed. The ListAnimate output shows a selection of winterized rainforest images.
ResourceObject[a="CycleGAN Summer-to-Winter Translation"];ListAnimate[NetModel[a][#]&/@WebImageSearch["rainforest","Images"]]
Honorable Mention
Sarah Stanley, Principal Consultant
Changing Tigers’ Stripes (128 characters)
Like Danny, Sarah also earned a second honorable mention, for an image search and neural network combination that removes tigers’ stripes. The ResourceObject that the code retrieves is the CycleGAN Zebra-to-Horse Translation Trained on ImageNet Competition Data neural network, a name that would have chewed up 72 of her 128 characters had her code not instead used the more compact numeric identifier. While the original network was trained to convert zebras to horses, Sarah applied it to a new domain, white tigers, to interesting effect.
ResourceObject[a="4b14804010cd43e281521a31c675cec3"];{#,NetModel[a]@#}&/@WebImageSearch["white tiger","Images"]//TableForm
Honorable Mention
Brian Wood, Lead Technical Marketing Writer
A Little Fun with Motion (117 characters)
Brian’s submission does video effects on the fly with a compact piece of imageprocessing code that creates color trails as an object moves. When an object is stationary, the superimposed color trails sum to faithfully recreate the original image.
Manipulate[With[{c:=CurrentImage[],p:=Pause[t]},ImageAdd[(p;ImageMultiply[c,#])&/@{Red,Green,Blue}]],{t,.05,.15,.01}]
Honorable Mention
Daniel Carvalho, International Business Development Executive
Wave (93 characters)
After knotting their brains trying to understand some of the more complex submissions, the judges found Daniel’s meditative, gently rolling wave a soothing balm.
Animate[Plot3D[Sin[f+x]Cos[f+y],{x,0,2Pi},{y,0,2Pi},ColorFunction->"DeepSeaColors"],{f,0,Pi}]
Dishonorable Mention
Jon McLoone, Director, Technical Communication and Strategy
Surprisingly Short Minesweeper Code (47 characters?)
Jon’s Minesweeper submission was a first: an entry that hacked the submission notebook to subvert its charactercounting code. It serves as a brilliant example of why you get that annoying Enable Dynamics button when you open a Wolfram Notebook that contains dynamic code:
When you open Jon’s submission, you see 2,000-some characters of code for a functional Minesweeper game that begins like this:
DynamicModule[{$GameTime = 0, $Time, data = {{}}, display = {},
neighbours, $GameState = "Start", $GameData, h = 9, w = 9, n = 10,
bombchar =
In spite of the huge submission, the character counter at the top shows that his submission is just 47 characters long:
A note that accompanied Jon’s submission reads, “Surprisingly short Minesweeper code. It may look longer but scores only 47 characters. Go on check! And, I promise, I haven’t changed the submission template, you can copy the code into a fresh OneLiner template and see.”
So how did he do it? He indeed hadn’t changed the source code embedded in the submission notebook, but he did redefine some of the functions that that code defines. You can see how when you use Cell > Show Expression on the cell that contains his code.
The first "0" in the code is wrapped with a DynamicWrapperBox that gives the submission notebook’s character-counting functions new definitions. Instead of counting the characters in the submission, the new definitions count the characters in the string “Surely deserving of a dishounourable [sic] mention!!!” (47 characters):
RowBox[{"$GameTime","=",InterpretationBox[DynamicWrapperBox["0",Quiet[Clear[$CellContext`BoxesToTweetString,$CellContext`UntweetableBoxTypes];$CellContext`UntweetableBoxTypes[BlankNullSequence[]]={};$CellContext`BoxesToTweetString[BlankNullSequence[]]:="Surely deserving of a dishounourable mention!!!";
Protect[$CellContext`UntweetableBoxTypes,$CellContext`BoxesToTweetString]] ],0]}]
The first time Jon’s submission is scrolled onscreen, the DynamicWrapperBox code activates and hacks the submission notebook. Indeed deserving of a dishonorable mention, Jon!
Third Place
Jofre Espigule-Pons, Consultant
Endangered Species (122 characters)
The best submissions combine Wolfram Language components in ways that produce beautiful, useful and surprising results. Jofre’s submission meets all three criteria. It finds the intersection of the class of mammals with the class of endangered species (i.e. the class of endangered mammals), gets an image of each one and assembles the images into a collage—a graphic reminder of the biological wealth we are in danger of losing.
ImageCollage[#["Image"]&/@Intersection[EntityClass[s="Species","Mammal"][e="Entities"],EntityClass[s,"EndangeredSpecies"][e]],Method->"Columns"]
Second Place
Lou D’Andria, Senior User Interface Developer
Wolfram Celebrities (123 characters)
We had a lot of fun with Lou’s submission that scrapes employee images from our company directory and uses Classify to find the notable person that they most resemble.
Labeled[#,Classify["NotablePerson",#]]&/@Import["https://abcd.wolfram.com/efghi/jilmnopqr/stuvwx/images",{"HTML","Images"}]
A surprisingly large number of people in the company were identified as “Stephen Wolfram” (including Stephen himself). Hmm...
First Place
Jon McLoone, Director, Technical Communication and Strategy
Evolving Abstract Art (68 characters)
The same colleague who earned this competition’s dishonorable mention also took first place. The elegance and concision of Jon McLoone’s 68character submission won over the judges with its high ratio of graphical impact to code length. It’s both animated and graphically engaging, and keeps you watching to see how the image will evolve:
i=RandomImage[1,300,ColorSpace->"RGB"];
Dynamic[i=ImageRestyle[i,i]]
Jon took advantage of the complexitycompounding effect of repetition to create code that delivers far more than its small character count promises. Congratulations, Jon!
There were many more great entries—34 total—that you can see by downloading this notebook. To all who participated: thanks for showing us once again the power of the Wolfram Language.