unnamed temporary file on that filesystem
</li>
<li>
      As with <tt>tmpfile()</tt>, the file disappears on
last <tt>close()</tt>
</li>
</ul>
</div>
<div class="slide">
<h1>Network busy-polling [3.11] (1)</h1>
<p>A conventional network request/response process looks like:</p>
</div>
<div class="slide">
  <h1>Lustre filesystem [3.12]</h1>
  <ul>
    <li>
      A distributed filesystem, popular for cluster computing
      applications
    </li>
    <li>
      Developed out-of-tree since 1999, but now added to Linux staging
      directory
    </li>
    <li>
      Was included in squeeze but dropped from wheezy as it didn't
      support Linux 3.2
    </li>
    <li>
      Userland is now missing from Debian
    </li>
  </ul>
</div>

<div class="slide">
<h1>Btrfs offline dedupe [3.12]</h1>
<ul class="incremental">
<li>
      Btrfs generally copies and frees blocks, rather than updating
      in-place
    </li>
    <li>
      This allows snapshots and file copies to copy-by-reference,
      deferring the real copying until changes are made
</li>
<li>
Filesystems may still end up with multiple copies of the same
and ebtables
</li>
<li>
      All limited to a single protocol, and need a kernel module for
      each match type and each action
</li>
<li>
      Kernel's internal netfilter API is more flexible
</li>
<li>
nftables exposes more of this flexibility, allowing userland
nftables userland tool uses this API and is already packaged
</li>
<li>
      Eventually, old APIs will be removed and old userland
tools must be ported to use nftables
</li>
</ul>
</div>
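As a sketch of the gain in flexibility: with nftables one ruleset in the <tt>inet</tt> family covers IPv4 and IPv6 at once, and matches and actions are expressed generically instead of each needing its own kernel module. The rules below are purely illustrative input for the <tt>nft</tt> tool:

```
# one 'inet' table covers both IPv4 and IPv6
table inet filter {
    chain input {
        type filter hook input priority 0;
        ct state established,related accept
        tcp dport 22 accept
        counter drop
    }
}
```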
<div class="slide">
  <h1>arm64 and ppc64el ports</h1>
  <ul class="incremental">
    <li>
      'arm64' architecture was added in Linux 3.7, but was not yet
      usable, and no real hardware was available at the time
    </li>
    <li>
      Upstream Linux arm64 kernel, and Debian packages, should now run
      on emulators and real hardware
    </li>
    <li>
      'powerpc' architecture has been available for many years,
      but didn't support kernel running little-endian
    </li>
    <li>
      Linux 3.13 added little-endian kernel support, along with new
      userland ELF ABI variant - we call it ppc64el
    </li>
    <li>
      Both ports now being bootstrapped in unstable and are candidates
      for jessie release
    </li>
  </ul>
</div>

<div class="slide">
  <h1>File-private locking [3.15]</h1>
  <ul class="incremental">
    <li>
      POSIX says that closing a file descriptor removes
      the <em>process</em>'s locks on that file
    </li>
    <li>
      What if process has multiple file descriptors for the same
      file? It loses all locks obtained through any descriptor!
    </li>
    <li>
      Multithreaded processes may require serialisation around
      file open/close to ensure they open each file exactly once
    </li>
    <li>
      Hard and symbolic links can hide that two files are really the
      same
    </li>
    <li>
      Linux now provides file-private locks, associated with a
      specific open file and removed when last descriptor for the
      open file is closed
    </li>
  </ul>
</div>

<div class="slide">
  <h1>Multiqueue block devices [3.16]</h1>
  <ul class="incremental">
    <li>
      Each block device has a command queue (possibly shared with
      other devices)
    </li>
    <li>
      Queue may be partly implemented by hardware (NCQ) or only
      in software
    </li>
    <li>
      A single queue means initiation is serialised and completion
      involves IPI - can be bottleneck for fast devices
    </li>
    <li>
      High-end SSDs support multiple queues, but kernel needed changes
      to use them
    </li>
    <li>
      <tt>nvme</tt> and <tt>mtip32xx</tt> drivers now support
      multiqueue, but SCSI drivers don't yet - may be backport-able?
    </li>
  </ul>
</div>

<div class="slide">
<h1>Questions?</h1>
</div>