7 Julia Gotchas and How to Handle Them
October 4 2016 in Julia, Programming | Tags: | Author: Christopher Rackauckas
Let me start by saying Julia is a great language. I love the language; it is the most powerful and intuitive language that I have ever used. It’s undoubtedly my favorite language. That said, there are some “gotchas”, tricky little things you need to know about. Every language has them, and one of the first things you have to do in order to master a language is to find out what they are and how to avoid them. The point of this blog post is to help accelerate this process for you by exposing some of the most common “gotchas” and offering alternative programming practices.
Julia is a good language for understanding what’s going on because there’s no magic. The Julia developers like to have clearly defined rules for how things act. This means that all behavior … READ MORE
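To give a flavor of the kind of gotcha the post covers (this particular example is my own illustration, not necessarily one of the seven): non-constant globals are type-unstable, so functions that read them are slow, and the standard fix is to pass values as arguments or declare the global const.

a = 3.0                 # non-constant global: its type could change at any time
f() = a + 1.0           # type-unstable: the compiler can't assume a stays a Float64

const b = 3.0           # const global: the type is fixed
g() = b + 1.0           # type-stable and fast

h(x) = x + 1.0          # best practice: pass the value in as an argument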
Introducing DifferentialEquations.jl
August 1 2016 in Differential Equations, FEM, Julia, Programming, Stochastics | Tags: DifferentialEquations.jl, julia | Author: Christopher Rackauckas
Edit: This post is very old. See this post for more up-to-date information.
Differential equations are ubiquitous throughout mathematics and the sciences. In fact, I myself have studied various forms of differential equations stemming from fields including biology, chemistry, economics, and climatology. What is interesting is that, although many different people are using differential equations for many different things, pretty much everyone wants the same thing: to quickly solve differential equations in their various forms, and make some pretty plots to describe what happened.
The goal of DifferentialEquations.jl is to do exactly that: to make it easy to solve differential equations with the latest and greatest algorithms, and put out a pretty plot. The core idea behind DifferentialEquations.jl is that, while it is easy to describe a differential equation, differential equations have such diverse behavior that experts have spent over a century compiling … READ MORE
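For a taste of the interface (sketched in the modern API per the edit note above, since the calls have changed since this 2016 announcement):

using DifferentialEquations, Plots

f(u, p, t) = 1.01u                     # the ODE u' = 1.01u
prob = ODEProblem(f, 0.5, (0.0, 1.0))  # initial condition 0.5 on t ∈ [0, 1]
sol = solve(prob)                      # an appropriate algorithm is chosen automatically
plot(sol)                              # ... and a pretty plot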
Using Julia’s Type System For Hidden Performance Gains
June 7 2016 in Julia, Programming | Tags: DifferentialEquations.jl, julia, performance | Author: Christopher Rackauckas
What I want to share today is how you can use Julia’s type system to hide performance gains in your code. What I mean is this: in many cases you may find out that the optimal way to do some calculation is not a “clean” solution. What do you do? What I want to do is show how you can define special arrays which are wrappers, such that these special “speedups” are performed in the background, without having to keep all of that muck in your main algorithms. This is easiest to show by example.
The examples I will be building towards are useful for solving ODEs and SDEs. Indeed, these tricks have all been implemented as part of DifferentialEquations.jl and so these examples come from a real use case! They really highlight a main feature of Julia: … READ MORE
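As a minimal sketch of the pattern (the names here are mine, not the post’s): define a wrapper type that satisfies the AbstractArray interface, precompute the “speedup” data in its constructor, and let the main algorithm index it like any other array.

# Written in post-0.6 syntax; the 2016 original would use `immutable`.
struct CachedSquares{T} <: AbstractVector{T}
    data::Vector{T}
    squares::Vector{T}                 # precomputed once, hidden from callers
end
CachedSquares(data::Vector{T}) where {T} = CachedSquares(data, data .^ 2)

Base.size(A::CachedSquares) = size(A.data)
Base.getindex(A::CachedSquares, i::Int) = A.data[i]
square(A::CachedSquares, i::Int) = A.squares[i]   # O(1) lookup instead of recomputing

A = CachedSquares([1.0, 2.0, 3.0])
A[2]          # 2.0 — behaves like a plain vector
square(A, 2)  # 4.0 — the hidden precomputation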
Finalizing Your Julia Package: Documentation, Testing, Coverage, and Publishing
May 16 2016 in Julia | Tags: AppVeyor, coverage, documentation, Documenter.jl, julia, testing, Travis CI | Author: Christopher Rackauckas
In this tutorial we will go through the steps to finalizing a Julia package. At this point you have some functionality you wish to share with the world… what do you do? You want to have documentation, code testing each time you commit (on all the major OSs), a nice badge which shows how much of the code is tested, and to put it into METADATA so that people can install your package just by typing Pkg.add("Pkgname"). How do you do all of this?
Note: At any time, feel free to check out my package repository DifferentialEquations.jl, which should be a working example.
Generate the Package and Get it on GitHub
First you will want to generate your package and get it into a GitHub repository. Make sure you have a GitHub account, and then set up the environment variables in the git shell:
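(What follows is my sketch with placeholder values, in the pre-1.0 Pkg API this post targets, not the post’s exact commands.)

# Git identity that the package generator reads; placeholders, not real values
run(`git config --global user.name "FULL NAME"`)
run(`git config --global user.email "EMAIL"`)
run(`git config --global github.user "USERNAME"`)

# Generate the skeleton; "MyPkg" is a placeholder name.
Pkg.generate("MyPkg", "MIT")   # pre-1.0 API: creates src/, test/, a license, and CI stubs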
Optimal Number of Workers for Parallel Julia
April 16 2016 in HPC, Julia, Programming, Stochastics | Tags: BLAS, hyperthreading, julia, parallel computing, workers | Author: Christopher Rackauckas
How many workers do you choose when running a parallel job in Julia? The answer is easy, right? The number of physical cores. We always default to that number. For my Core i7 4770K, that means 4, not 8, since 8 would count the hyperthreads. On my FX8350 there are 8 cores but only 4 floating-point units (FPUs), which do the math, so for mathematical projects I should use 4, right? I want to demonstrate that it’s not that simple.
Where the Intuition Comes From
Most of the time when doing scientific computing you are doing parallel programming without even knowing it. This is because a lot of vectorized operations are “implicitly parallelized”, meaning that they are multi-threaded behind the scenes to make everything faster. This is also the case in other languages like Python, MATLAB, and R. Fire up MATLAB … READ MORE
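If you want to test this on your own machine, a minimal sketch (mine, not the post’s benchmark) is to vary the worker count and time an FPU-bound pmap:

using Distributed           # in the Julia of this post, these were in Base

addprocs(4)                 # try 4, then 8, etc., and compare the timings

@everywhere function work(i)
    s = 0.0
    for k in 1:10^7
        s += sin(k * i)     # floating-point-bound inner loop
    end
    return s
end

@time pmap(work, 1:32)      # wall time as a function of the number of workers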
Holding off on Julia for a little bit… you should blog!
March 28 2016 in Uncategorized | Tags: | Author: Christopher Rackauckas
I think I am going to stop posting on Julia for a little bit. I looked at JuliaBloggers.com and realized my blog is hogging it too much. Since I have gone through a pretty good arc, starting from writing FEM code and ending at multi-GPU / Xeon Phi computing, I think that means I will focus on a few other topics for a little bit. However, I will be doing a blog post on native Xeon Phi usage via Julia’s ParallelAccelerator.jl sometime soon, so be prepared for that.
In the meantime, stay tuned for topics like stochastic numerics, theoretical biology, Mathematica, and HPCs in the near future.
If you have the time, start up your own Julia blog and start contributing to JuliaBloggers!
Benchmarks of Multidimensional Stack Implementations in Julia
March 20 2016 in Julia, Programming | Tags: benchmark, data structures, julia, stack | Author: Christopher Rackauckas
DataStructures.jl claims it’s fast. How does it do? I wrote some quick code to check it out. What I wanted to do is find out which algorithm does best for implementing a stack where each element is three integers. I tried filling a pre-allocated array, pushing into three separate vectors, and different implementations of the stack from the DataStructures.jl package.
function baseline()
    # Fill a pre-allocated 1000000×3 matrix, rows in the outer loop
    stack = Array{Int64,2}(1000000,3)   # pre-0.7 uninitialized-array constructor
    for i=1:1000000, j=1:3
        stack[i,j] = i
    end
end

function baseline2()
    # Same fill, but with the inner loop over rows, matching Julia's
    # column-major memory layout
    stack = Array{Int64,2}(1000000,3)
    for j=1:3, i=1:1000000
        stack[i,j] = i
    end
end
… READ MORE
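For comparison, the “three separate vectors” contender mentioned above looks roughly like this (my reconstruction, not the post’s exact code):

function three_vectors()
    a = Int64[]; b = Int64[]; c = Int64[]   # one growable vector per column
    for i = 1:1000000
        push!(a, i)
        push!(b, i)
        push!(c, i)
    end
end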
MATLAB 2016a Release Summary for Scientific Computing
March 15 2016 in MATLAB, Programming | Tags: 2016a, MATLAB, optimization, parallel | Author: Christopher Rackauckas
There is a lot to read every time MATLAB releases a new version. Here is a summary of what has changed in 2016a from the eyes of someone doing HPC/Scientific Computing/Numerical Analysis. This means I will leave off a lot, and you should check it out yourself, but if you’re using MATLAB for science then this may cover most of the things you care about.
- Support for sparse matrices on the GPU. Nice additions are sprand and pcg (a preconditioned conjugate gradient solver) for sparse GPU matrices.
- One other big change in the parallel computing toolbox is that you can now set nonlinear solvers to estimate gradients and Jacobians in parallel. This should be a nice boost for the MATLAB optimization toolbox.
- In the statistics and machine learning toolbox, they added some algorithms for high dimensional data and now let you run kmeans … READ MORE
Interfacing with a Xeon Phi via Julia
March 4 2016 in C, HPC, Julia, Programming, Stochastics, Xeon Phi | Tags: C, julia, MIC, OpenMP, parallel, Xeon Phi | Author: Christopher Rackauckas
(Disclaimer: This is not a full-Julia solution for using the Phi; instead, it is a tutorial on how to link OpenMP/C code for the Xeon Phi to Julia. There may be a future update where some of these functions are specified in Julia, and Intel’s CompilerTools.jl looks like a viable solution, but for now it’s not possible.)
Intel’s Xeon Phi has a lot of appeal. It’s an instant cluster in your computer, right? It turns out it’s not quite that easy. For one, the installation process itself is quite tricky, and the device has stringent requirements for motherboard choices. Also, maxing out at over a teraflop is good, but not quite as high as NVIDIA’s GPU accelerator cards.
However, there are a few big reasons why I think our interest in the Xeon Phi should be renewed. For one, Intel … READ MORE
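The linking itself is an ordinary ccall; a minimal sketch of the pattern (the library and function names are hypothetical, assuming a C routine compiled with OpenMP for the Phi):

# Suppose a shared library "libmic" exports
#   void saxpy(double a, const double *x, double *y, int n)
# with an OpenMP-parallel loop inside.
n = 10^6
x = rand(n); y = zeros(n)
ccall((:saxpy, "libmic"), Cvoid,               # Cvoid was spelled Void pre-0.7
      (Float64, Ptr{Float64}, Ptr{Float64}, Cint),
      2.0, x, y, n)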
Multiple-GPU Parallelism on the HPC with Julia
February 28 2016 in CUDA, HPC, Julia, Programming | Tags: CUDA, gpu, HPC, julia | Author: Christopher Rackauckas
This is the exciting Part 3 of using Julia on an HPC. First I got you started with using Julia on multiple nodes. Second, I showed you how to get the code running on the GPU. That gets you pretty far. However, if you got a trial allocation on Comet and started running jobs, you may have noticed when looking at the architecture that you’re not getting to use the full GPU. In the job script I showed you, I asked for 2 GPUs. Why? Well, that’s because the flagship NVIDIA GPU, the Tesla K80, is actually a dual GPU, and you have to control the two parts separately. You may have been following along on your own computer and wondering how to use the multiple GPUs in your setup as well. This tutorial will … READ MORE
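The basic pattern (a sketch under the assumption of one worker per GPU half; the device-selection call depends on your CUDA wrapper) is:

using Distributed           # Base in the Julia of this post
addprocs(2)                 # one worker per half of the dual-GPU Tesla K80

@everywhere function run_on_gpu(dev)
    # Select the device with your wrapper's call here, e.g. the modern
    # CUDA.device!(dev); left commented so this sketch runs anywhere.
    # device!(dev)
    return (myid(), dev)    # placeholder for the real kernel launches
end

pmap(run_on_gpu, [0, 1])    # devices 0 and 1 are the K80's two halves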