<ul>
<li>
Professional software engineer by day, Debian developer by night
+ (or sometimes the other way round)
</li>
<li>
Regular Linux contributor in both roles since 2008
(every week or two)
<ul>
<li>
- ...though some features aren't ready to use when they firat
+ ...though some features aren't ready to use when they first
appear in a release
</li>
</ul>
</li>
</ul>
</div>
+<div class="slide">
+ <h1>Unnamed temporary files [3.11]</h1>
+ <ul>
+ <li>
+ Open directory with option <tt>O_TMPFILE</tt> to create an
+ unnamed temporary file on that filesystem
+ </li>
+ <li>
+      As with <tt>tmpfile()</tt>, the file disappears on
+ last <tt>close()</tt>
+ </li>
+ <li>
+ File can be linked into the filesystem using
+ <tt>linkat(..., AT_EMPTY_PATH)</tt>, allowing for 'atomic'
+ creation of file with complete contents and metadata
+ </li>
+ <li>
+ Not supported on all filesystem types, so you will usually need
+ a fallback
+ </li>
+ </ul>
+</div>
+
+<div class="slide">
+ <h1>Lustre filesystem [3.12]</h1>
+ <ul>
+ <li>
+ A distributed filesystem, popular for cluster computing
+ applications
+ </li>
+ <li>
+ Developed out-of-tree since 1999, but now added to Linux staging
+ directory
+ </li>
+ <li>
+ Was included in squeeze but dropped from wheezy as it didn't
+ support Linux 3.2
+ </li>
+ <li>
+ Userland is now missing from Debian
+ </li>
+ </ul>
+</div>
+
+<div class="slide">
+ <h1>Network busy-polling [3.11] (1)</h1>
+ <p>A conventional network request/response process looks like:</p>
+ <small><!-- ew -->
+ <ol class="incremental">
+ <li>
+ Task calls <tt>send()</tt>; network stack constructs a
+ packet; driver adds it to hardware Tx queue
+ </li>
+ <li>
+ Task calls <tt>poll()</tt> or <tt>recv()</tt>, which blocks;
+ kernel puts it to sleep and possibly idles the CPU
+ </li>
+ <li>
+ Network adapter receives response and generates IRQ, waking
+ up CPU
+ </li>
+ <li>
+ Driver's IRQ handler schedules polling of the hardware Rx
+ queue (NAPI)
+ </li>
+ <li>
+ Kernel runs the driver's NAPI poll function, which passes
+ the response packet into the network stack
+ </li>
+ <li>
+ Network stack decodes packet headers and adds packet to
+ the task's socket
+ </li>
+ <li>
+ Network stack wakes up sleeping task; scheduler switches
+ to it and the socket call returns
+ </li>
+ </ol>
+ </small>
+</div>
+
+<div class="slide">
+ <h1>Network busy-polling [3.11] (2)</h1>
+ <ul class="incremental">
+ <li>
+ If driver supports busy-polling, it tags each packet with
+ the receiving NAPI context, and kernel tags sockets
+ </li>
+ <li>
+ When busy-polling is enabled, <tt>poll()</tt>
+ and <tt>recv()</tt> call the driver's busy poll function to
+ check for packets synchronously (up to some time limit)
+ </li>
+ <li>
+      If the response usually arrives quickly, this reduces overall
+      request/response latency by avoiding context switches and
+      power transitions
+ </li>
+ <li>
+      Time limit set by sysctl (<tt>net.core.busy_poll</tt>,
+      <tt>net.core.busy_read</tt>) or socket option (<tt>SOL_SOCKET,
+      SO_BUSY_POLL</tt>); requires tuning
+ </li>
+ </ul>
+</div>
+
<div class="slide">
<h1>Questions?</h1>
</div>