Birds of a Feather
Wednesday, October 17
 

10:30am PDT

Lifecycle of LLVM bug reports
The goal of this BoF is to improve the (currently non-existent) definition and documentation of the lifecycle of LLVM bug tickets. Not having a documented lifecycle results in a number of issues, several of which have come up recently on the mailing list, including:

-- When bugs get closed, what is the right amount of information that should be required so that the bug report is as meaningful as possible without putting unreasonable demands on the person closing the bug?
-- When bugs get reported, what level of triaging, and on what timeline, should we aim for to keep bug reporters engaged?
-- What should we aim to achieve during triaging?

Speakers

Kristof Beyls

Senior Principal Engineer, Arm
compilers and related tools, profiling, security.

Paul Robinson

Senior Staff Compiler Engineer, Sony Interactive Entertainment


Wednesday October 17, 2018 10:30am - 11:00am PDT
3 - BoF (Rm LL21CD)

11:30am PDT

Debug Info BoF
There have been significant improvements to LLVM's handling of debug info in optimized code over the past year. We will highlight recent improvements (many of which came from new contributors!) and outline some important challenges ahead.

To get the conversation going, we will present data showing improvements in source variable availability and identify passes that need more work. Potential topics for discussion include eliminating codegen differences in the presence of debug info, improving line table fidelity, features missing in LLVM's current debug info representation, and higher quality backtraces in the presence of tail calls and outlining.

Speakers

Adrian Prantl

Apple
Ask me about debug information in LLVM, Clang and Swift!


Wednesday October 17, 2018 11:30am - 12:00pm PDT
3 - BoF (Rm LL21CD)
 
Thursday, October 18
 

10:00am PDT

GlobalISel Design & Development
GlobalISel is the planned successor to the SelectionDAG instruction selection framework, currently used by the AArch64 target by default at the -O0 optimization level, and with partial implementations in several other backends. It also has downstream users, including the Apple GPU compiler. The long-term goal for the project is to replace the block-based selection strategy of SelectionDAG with a more efficient framework that can also do function-wide analysis and optimization. The design of GlobalISel has evolved over its lifetime and continues to do so. The aim of this BoF is to bring together developers and other parties interested in the state and future progress of GlobalISel, and to discuss some issues that would benefit from community feedback. We will first give a short update on progress within the past year. Possible discussion topics include:

• The current design of GISel’s pipeline, with particular focus on how well the architecture will scale as the focus on optimisation increases.
• For new backends, how is the experience in bringing up a GlobalISel-based code generator? For existing backends, are there any impediments to continuing development?
• Does using GlobalISel mean that double the work is necessary, with more maintenance costs? How can this be mitigated?
• Are there additional features that developers would like to see in the framework? What SelectionDAG annoyances should we take particular care to avoid or improve upon?

Speakers

Amara Emerson

Apple
Analysis, optimization, loops, code generation, back-end, GlobalISel.


Thursday October 18, 2018 10:00am - 10:30am PDT
3 - BoF (Rm LL21CD)

11:00am PDT

LLVM Foundation BoF
Come meet the new members of the LLVM Foundation board, get program updates, and have your questions answered.

Speakers

Chandler Carruth

Software Engineer, Google
Chandler Carruth is the technical lead for Google's programming languages and software foundations. He has worked extensively on the C++ programming language and the Clang and LLVM compiler infrastructure. Previously, he worked on several pieces of Google's distributed build system...

Mike Edwards

LLVM Foundation

Hal Finkel

Argonne National Laboratory

Anton Korobeynikov

Associate Professor, Center for Algorithmic Biotechnology, Saint Petersburg State University, 6 linia V.O., 11/21d, 1990034 St Petersburg, Russia

Tanya Lattner

President, LLVM Foundation

John Regehr

University of Utah


Thursday October 18, 2018 11:00am - 11:30am PDT
3 - BoF (Rm LL21CD)

12:00pm PDT

Clang Static Analyzer BoF
This BoF will provide an opportunity for developers and users of the Clang Static Analyzer to discuss the present and future of the analyzer. We will start with a brief overview of analyzer features added by the community over the last year, including our Google Summer of Code projects on theorem prover integration and detection of deallocated inner C++ pointers. We will discuss possible focus areas for the next year, including laying the foundations for analysis that crosses the boundaries of translation units. We would also like to brainstorm and gather community feedback on potential dataflow-based checks, ask for community help to improve analyzer C++17 support, and discuss the challenges and opportunities of C++20 support, including contracts.

Speakers

Thursday October 18, 2018 12:00pm - 12:30pm PDT
3 - BoF (Rm LL21CD)

2:30pm PDT

Migrating to C++14, and beyond!
C++11 was a huge step forward for C++. C++14 is a much smaller step, yet it still brings plenty of great features. C++17 will be similarly small but nice. The LLVM community should discuss how we want to handle migration to newer versions of C++: how do we handle compiler requirements, notify developers early enough, manage ABI changes in toolchains, and carry out the migration itself? Let’s get together and hash this out!

Speakers

Thursday October 18, 2018 2:30pm - 3:00pm PDT
3 - BoF (Rm LL21CD)

3:00pm PDT

Implementing the parallel STL in libc++
LLVM 7.0 has almost complete support for C++17, but libc++ is missing a major component: the parallel algorithms. Let's meet to discuss the options for how to implement support.

Speakers

Thursday October 18, 2018 3:00pm - 3:30pm PDT
3 - BoF (Rm LL21CD)

4:30pm PDT

Ideal versus Reality: Optimal Parallelism and Offloading Support in LLVM

Explicit parallelism and offloading support is an important and growing part of LLVM’s ecosystem for CPUs, GPUs, FPGAs, and accelerators. LLVM's optimizer has not traditionally been involved in explicit parallelism and offloading support; specifically, the outlining logic and the lowering translation into runtime-library calls reside in Clang. While there are several reasons why the optimizer must be involved in parallelization in order to suitably handle a wide set of applications, the design of an appropriate parallel IR for LLVM remains unsettled. Several groups (ANL, Intel, MIT, UIUC) have been experimenting with implementation techniques that push this transformation process into LLVM's IR-level optimization passes [1, 2, 3, 4, 5]. These efforts all aim to allow the optimizer to leverage language properties and optimize parallel constructs before they're transformed into runtime calls and outlined functions. Over the past couple of years, these groups have implemented out-of-tree extensions to LLVM IR to represent and optimize parallelism, and these designs have been influenced by community RFCs [6] and discussions on this topic. In this BoF, we will discuss the use cases we'd like to address and several of the open design questions, including:

* Is a canonical loop form necessary for parallelization and vectorization?
* What are the SSA update requirements for extensive loop parallelization and transformation?
* What are the required changes to, and impact on, existing LLVM optimization passes and analyses (e.g., inlining and aliasing-information propagation)?
* How do we represent and leverage language properties of parallel constructs in LLVM IR?
* Where is the proper place in the pipeline to lower these constructs?

The purpose of this BoF is to bring together all parties interested in optimizing parallelism and offloading support in LLVM, as well as the experts in the parts of the compiler that will need to be modified. Our goal is to discuss the gap between ideal and reality and identify the pieces that are still missing. In the best case, we expect interested parties to agree on the next steps towards better parallelism support in Clang and LLVM.



Speakers

Johannes Doerfert

Argonne National Laboratory

Hal Finkel

Argonne National Laboratory

Xinmin Tian

Intel Corp.


Thursday October 18, 2018 4:30pm - 5:00pm PDT
3 - BoF (Rm LL21CD)

5:00pm PDT

Should we go beyond `#pragma omp declare simd`?

This BoF is for the people involved in developing the interface between the compiler and the vector routines provided by a library (including the C99 math functions) or via user code. The discussion is ongoing on the mailing list: [1] http://lists.llvm.org/pipermail/llvm-dev/2018-July/124520.html
Problem statement: "How should the compiler know which vector functions are available in a library or in a module when auto-vectorizing scalar calls, given that standards like OpenMP and vector function ABIs cannot provide a 1:1 mapping from the scalar functions to the vector ones?"
Practical example of the problem:
1. Library L provides a vector `sin` for target T operating on four lanes, but in two versions: a _slow_ vector `sin` that guarantees high precision, and a _fast_ version with relaxed precision requirements. How should the compiler allow the user to choose between the two?
2. A user can write serial code and rely on the auto-vectorization capabilities of the compiler to generate vector functions using the `#pragma omp declare simd` directive of OpenMP. Sometimes the compiler doesn't do a good job of vectorizing such functions, because not all the micro-architectural capabilities of a vector extension can be exposed in the vectorizer pass. This situation often forces a user to write target-specific vector loops that invoke a target-specific implementation of the vector function, mostly via preprocessor directives that reduce the maintainability and portability of the code. How can we help Clang users avoid this situation? Could they rely on the compiler to pick the correct version of the vector function without modifying the original code, other than adding the source of the hand-optimized version of the vector function?
Proposed schedule:
1. State the problem that we are trying to solve.
2. List the proposed solutions.
3. Discuss the pros and cons of each.
4. Come up with a common plan that we can implement in Clang/LLVM.

Speakers

Thursday October 18, 2018 5:00pm - 5:30pm PDT
3 - BoF (Rm LL21CD)
 