LLVM Project Blog

LLVM Project News and Details from the Trenches

Monday, September 12, 2016

Announcing the next LLVM Foundation Board of Directors

The LLVM Foundation is pleased to announce its new Board of Directors:


Chandler Carruth
Hal Finkel
Arnaud de Grandmaison
David Kipping
Anton Korobeynikov
Tanya Lattner
Chris Lattner
John Regehr


Three new members and five continuing members were elected to the eight person board. The new board consists of individuals from corporations and from the academic and scientific communities. They also represent various geographical groups of the LLVM community. All board members are dedicated and passionate about the programs of the LLVM Foundation and growing and supporting the LLVM community.


When voting on new board members, we took into consideration all contributions (past and present) and current involvement in the LLVM community. We also tried to create a balanced board of individuals from a wide range of backgrounds and locations to provide a voice to as many groups within the LLVM community.


We want to thank everyone who applied as we had many strong applications. As the programs of the LLVM Foundation grow we will be relying on volunteers to help us reach success. Please join our mailing list to be informed of volunteer opportunities.


About the board of directors (listed alphabetically by last name):


Chandler Carruth has been an active contributor to LLVM since 2007. Over the years, he has worked on LLVM’s memory model and atomics, Clang’s C++ support, the GCC-compatible driver, the initial profile-aware code layout optimization pass, the pass manager, IPO infrastructure, and much more. He is the current code owner of inlining and SSA formation.


In addition to his numerous technical contributions, Chandler has led Google’s LLVM efforts since 2010 and shepherded a number of new efforts that have positively and significantly impacted the LLVM project. These new efforts include things such as adding C++ modules to Clang, adding address and other sanitizers to Clang/LLVM, making Clang compatible with MSVC and available to the Windows C++ developer community, and much more.


Chandler works at Google Inc. as a technical lead for their C++ developer platform and has served on the LLVM Foundation board of directors for the last 2 years.


Hal Finkel has been an active contributor to the LLVM project since 2011. He is the code owner for the PowerPC target, alias-analysis infrastructure, loop re-roller, and the basic-block vectorizer.


In addition to his numerous technical contributions, Hal has chaired the LLVM in HPC workshop, which is held in conjunction with Supercomputing (SC), for the last 3 years. This workshop provides a venue for the presentation of peer-reviewed HPC-related research on LLVM from both industry and academia. He has also been involved in organizing an LLVM-themed BoF session at SC and LLVM socials in Austin.


Hal is Lead for Compiler Technology and Programming Languages at Argonne National Laboratory’s Leadership Computing Facility.


Arnaud de Grandmaison has been hacking on LLVM projects since 2008. In addition to his open source contributions, he has worked for many years on private out-of-tree LLVM-based projects at Parrot, DiBcom, and ARM. He has also been a leader in the European LLVM community, organizing the EuroLLVM Developers’ Meetings and Paris socials, and chairing or serving on numerous program committees for the LLVM Developers’ Meetings and other LLVM-related conferences.


Arnaud has attended numerous LLVM Developers’ Meetings, volunteering as a moderator and presenting as well. He also moderates several LLVM mailing lists. Arnaud is also deeply involved in community-wide discussions and decisions, such as re-licensing and the code of conduct.


Arnaud is a Principal Engineer at ARM.


David Kipping has been involved with the LLVM project since 2010. He has been a key organizer and supporter of many LLVM community events such as the US and European LLVM Developers’ Meetings. He has served on many of the program committees for these events.


David has worked hard to advance the adoption of LLVM at Qualcomm and other companies. One example of his efforts is the LLVM track he created at the 2011 Linux Collaboration Summit. He has over 30 years’ experience in open source and developer tools, including working on C++ at Borland.


David has served on the board of directors for the last 2 years and has held the officer position of treasurer. The treasurer’s role is time-demanding: he supports the day-to-day operation of the foundation, balances the books, and generates monthly treasurer reports.


David is Director of Product Management at Qualcomm and has served on the LLVM Foundation board of directors for the last 2 years.


Anton Korobeynikov has been an active contributor to the LLVM project since 2006. Over the years, he has made numerous technical contributions to areas including Windows support, ELF features, debug info, exception handling, and backends such as ARM and x86. He was the original author of the MSP430 backend and of the original SystemZ backend.


In addition to his technical contributions, Anton has maintained LLVM’s participation in Google Summer of Code by managing applications, deadlines, and overall organization. He also supports the LLVM infrastructure and has been on numerous program committees for the LLVM Developers’ Meetings (both US and EuroLLVM).


Anton is currently an associate professor at the Saint Petersburg State University and has served on the LLVM Foundation board of directors for the last 2 years.


Tanya Lattner has been involved in the LLVM project for over 14 years. She began as a graduate student who wrote her master's thesis using LLVM, and continued using and extending LLVM technologies at various jobs during her career as a compiler engineer.


Tanya has been organizing the US LLVM Developers’ Meeting since 2008 and has attended every developer meeting. She was the LLVM release manager for 3 years, moderates the LLVM mailing lists, and helps administer the LLVM infrastructure: servers, mailing lists, bugzilla, etc. Tanya has also been on the program committee for the US LLVM Developers’ Meeting (4 years) and the EuroLLVM Developers’ Meeting (1 year).


With the support of the initial board of directors, Tanya created the LLVM Foundation, defined its charitable and educational mission, and worked to get 501(c)(3) status.


Tanya is the Chief Operating Officer and has served as the President of the LLVM Foundation board for the last 2 years.


Chris Lattner is well known as the founder of the LLVM project and has a lengthy history of technical contributions to it over the years. He drove much of the early implementation, architecture, and design of LLVM and Clang.


Chris has attended every LLVM Developers’ meeting and presented at the majority of them. He helped drive the conception and incorporation of the LLVM Foundation, and has served as Secretary of the board for the last 2 years. Chris also grants commit access to the LLVM project, moderates mailing lists, moderates and edits the LLVM blog, and drives important non-technical discussions and policy decisions related to the LLVM project.


Chris manages the Developer Tools department at Apple Inc and has served on the LLVM Foundation board of directors for the last 2 years.



John Regehr has been involved in LLVM for a number of years. As a professor of computer science at the University of Utah, his research specializes in compiler correctness and undefined behavior. He is well known within the LLVM community for the hundreds of bug reports his group has reported to LLVM/Clang.


John was a project lead for IOC, a Clang-based integer overflow checker that eventually became the basis for the integer parts of UBSan. He was also the primary developer of C-Reduce, which uses Clang as a library and is often used as a test-case reducer for compiler issues.

In addition to his technical contributions, John has served on several LLVM-related program committees. He also has a widely read blog about LLVM and other compiler-related issues (Embedded in Academia).

Monday, June 27, 2016

LLVM Weekly - #130, Jun 27th 2016

Welcome to the one hundred and thirtieth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

If you're reading this on blog.llvm.org then do note this is the LAST TIME it will be cross-posted there directly. There is a great effort underway to increase the content on the LLVM blog, and unfortunately LLVM Weekly has the effect of drowning out that content. As ever, you can head to http://llvmweekly.org, subscribe to get it by email, or subscribe to the RSS feed.

The canonical home for this issue can be found here at llvmweekly.org.

Tuesday, June 21, 2016

ThinLTO: Scalable and Incremental LTO

ThinLTO was first introduced at EuroLLVM in 2015, with results shown from a prototype implementation within clang and LLVM. Since then, the design was reviewed through several RFCs, it has been implemented in LLVM (for gold and libLTO), and tuning is ongoing. Results already show good performance for a number of benchmarks, with compile time close to a non-LTO build.

This blog post covers the background, design, current status and usage information.

This post was written by Teresa Johnson, Mehdi Amini and David Li.

Monday, June 20, 2016

LLVM Weekly - #129, Jun 20th 2016

Welcome to the one hundred and twenty-ninth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

The canonical home for this issue can be found here at llvmweekly.org.

Wednesday, June 15, 2016

Using LNT to Track Performance


In the past year, LNT has grown a number of new features that make performance tracking, and understanding the root causes of performance deltas, a lot easier. In this post, I’ll show how we’re using these features.

LNT contains two big pieces of functionality:
  1. A server,
    a. to which you can submit correctness and performance measurement data, by sending it a JSON file in the correct format,
    b. that analyzes which performance changes are significant and which ones aren't,
    c. that has a web UI to show results and analyses in a number of different ways.
  2. A command line tool to run tests and benchmarks, such as LLVM’s test-suite and the SPEC2000 and SPEC2006 benchmarks.
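To make the submission path in point 1a concrete, here is a minimal sketch of building a report and posting it to a server. The exact schema is documented at http://lnt.llvm.org/importing_data.html; the field names and the submit URL below follow the general shape of that format but should be treated as illustrative assumptions, not the authoritative definition:

```python
import json

def build_report(machine_name, run_order, samples):
    """Build an LNT-style JSON report (illustrative field names).

    samples maps program name -> list of execution-time samples;
    multiple samples become the vertically aligned dots in the
    daily-report sparklines.
    """
    return {
        "Machine": {"Name": machine_name, "Info": {}},
        "Run": {
            "Start Time": "2016-06-15 12:00:00",
            "End Time": "2016-06-15 12:30:00",
            # run_order is what lets the server order runs by
            # compiler revision rather than by submission time.
            "Info": {"tag": "nts", "run_order": str(run_order)},
        },
        "Tests": [
            {"Name": "nts.%s.exec" % prog, "Info": {}, "Data": times}
            for prog, times in samples.items()
        ],
    }

report = build_report("machine3", 270000,
                      {"ffbench": [1.23, 1.25], "bigfib": [4.56]})
payload = json.dumps(report)

# Submitting is then an HTTP POST of `payload` to the server's
# submit URL (hypothetical server name), e.g. with urllib:
#   from urllib.request import urlopen
#   from urllib.parse import urlencode
#   urlopen("http://yourlntserver:8000/db_default/submitRun",
#           data=urlencode({"input_data": payload}).encode())
```

In practice the `lnt runtest` and `lnt submit` command line tools generate and post this file for you; building it by hand is only needed when importing data from your own benchmarking harness.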
This post focuses on using the server. None of the features I’ll show are LLVM-specific, or even specific to ahead-of-time code generators, so you should be able to use LNT in the same way for all your code performance tracking needs. At the end, I’ll give pointers to the documentation needed to set up an LNT server and to construct the JSON file, with benchmarking and profiling data, to be submitted to the server.
The features highlighted focus on tracking the performance of code, not on other aspects LNT can track and analyze.
We have two main use cases in tracking performance:
  • Post-commit detection of performance regressions and improvements.
  • Pre-commit analysis of the impact of a patch on performance.
I'll focus on the post-commit detection use case.

Post-commit performance tracking

Step 1. Get an overview of the "Daily Report" page

Assuming your server runs at http://yourlntserver:8000, this page is located at http://yourlntserver:8000/db_default/v4/nts/daily_report
The page gives a summary of the significant changes it found today.
An example of the kind of view you can get on that page is the following
In the above screenshot, you can see that there were performance differences on 3 different programs, bigfib, fasta and ffbench. The improvement on ffbench only shows up on a machine named “machine3”, whereas the performance regression on the other 2 programs shows up on multiple machines.

The table shows how performance evolved over the past 7 days, one column for each day. The sparkline on the right shows graphically how performance has evolved over those days. When the program was run multiple times to get multiple sample points, these show as separate dots that are vertically aligned (because they happened on the same date). The background color in the sparkline represents a hash of the program binary. If the color is the same on multiple days, the binaries were identical on those days.

Let’s look first at the ffbench program. The background color in the sparkline is the same for the last 2 days, so the binary for this program didn’t change in those 2 days. Conclusion: the reported performance variation of -8.23% is caused by noise on the machine, not due to a change in code. The vertically spread out dots also indicate that this program has been noisy consistently over the past 7 days.

Let’s now look at bigfib. The background color in the sparkline has changed since its previous run, so let’s investigate further. By clicking on one of the machine names in the table, we go to a chart showing the long-term evolution of the performance of this program on that machine.

Step 2. The long-term performance evolution chart

This view shows how performance has evolved for this program since we started measuring it. When you click on one of the dots, each of which represents a single execution of the program, you get a pop-up with information such as the revision, the date at which this was run, etc.
When you click on the number after “Run:” in that pop-up, it’ll bring you to the run page.

Step 3. The Run page

The run page gives an overview of a full “Run” on a given machine. Exactly what a Run contains depends a bit on how you organize the data, but typically it consists of many programs being run a few times on one machine, representing the quality of the code generated by a specific revision of the compiler on one machine, for one optimization level.
This run page shows a lot of information, including performance changes seen since the previous run:
When hovering with the mouse over entries, a “Profile” button appears; when clicked, it shows profiles of both the previous run and the current run.

Step 4. The Profile page

At the top, the page gives you an overview of differences of recorded performance events between the current and previous run.
After selecting which function you want to compare, this page shows you the annotated assembly:


While it’s clear that there are differences between the disassembly, it’s often much easier to understand the differences by reconstructing the control flow graph to get a per-basic-block view of differences. By clicking on the “View:” drop-down box and selecting the assembly language you see, you can get a CFG view. I find showing absolute values rather than relative values helps to understand performance differences better, so I also chose “Absolute numbers” in the drop down box on the far right:
There is obviously a single hot basic block, and there are differences in instructions in the 2 versions. The number in the red side-bar shows that the number of cycles spent in this basic block has increased from 431M to 716M. In just a few clicks, I managed to drill down to the key codegen change that caused the performance difference!

We combine the above workflow with the llvmbisect tool available at http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/ to also quickly find the commit introducing the performance difference. We find that using both the above LNT workflow and the llvmbisect tool are vital to be able to act quickly on performance deltas.

Pointers on setting up your own LNT server for tracking performance

Setting up an LNT server is as simple as running the half-dozen commands documented at http://lnt.llvm.org/quickstart.html under "Installation" and "Viewing Results". The "Running tests" section is specific to LLVM tests; the rest is generic to performance tracking of any software.

The documentation for the json file format to submit results to the LNT server is here: http://lnt.llvm.org/importing_data.html.
The documentation for how to also add profile information is at http://lnt.llvm.org/profiles.html.


Monday, June 13, 2016

LLVM Weekly - #128, June 13th 2016

Welcome to the one hundred and twenty-eighth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

The canonical home for this issue can be found here at llvmweekly.org.

Monday, June 6, 2016

LLVM Weekly - #127, June 6th 2016

Welcome to the one hundred and twenty-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, or @llvmweekly or @asbradbury on Twitter.

The canonical home for this issue can be found here at llvmweekly.org.