15-02-2025

22:34

Real Python: Python Keywords: An Introduction [Planet Python]

Python keywords are reserved words with specific functions and restrictions in the language. Currently, Python has thirty-five keywords and four soft keywords. These keywords are always available in Python, which means you don’t need to import them. Understanding how to use them correctly is fundamental for building Python programs.

By the end of this tutorial, you’ll understand that:

  • There are 35 keywords and four soft keywords in Python.
  • You can get a list of all keywords using keyword.kwlist from the keyword module.
  • Soft keywords in Python act as keywords only in specific contexts.
  • print and exec are keywords that have been deprecated and turned into functions in Python 3.

In this article, you’ll find a basic introduction to all Python keywords and soft keywords along with other resources that will be helpful for learning more about each keyword.
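
As a quick check, the standard library's keyword module exposes both lists; a minimal sketch (the counts assume a recent Python such as 3.12):

import keyword

print(len(keyword.kwlist))             # 35 keywords
print(len(keyword.softkwlist))         # 4 soft keywords (list available since Python 3.9)
print(keyword.iskeyword("pass"))       # True
print(keyword.issoftkeyword("match"))  # True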

Get Your Cheat Sheet: Click here to download a free cheat sheet that summarizes the main keywords in Python.

Take the Quiz: Test your knowledge with our interactive “Python Keywords: An Introduction” quiz. You’ll receive a score upon completion to help you track your learning progress:



Python Keywords

Python keywords are special reserved words that have specific meanings and purposes and can’t be used for anything but those specific purposes. These keywords are always available—you’ll never have to import them into your code.

Python keywords are different from Python’s built-in functions and types. The built-in functions and types are also always available, but they aren’t as restrictive as the keywords in their usage.

An example of something you can’t do with Python keywords is assign something to them. If you try, then you’ll get a SyntaxError. You won’t get a SyntaxError if you try to assign something to a built-in function or type, but it still isn’t a good idea. For a more in-depth explanation of ways keywords can be misused, check out Invalid Syntax in Python: Common Reasons for SyntaxError.
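
For example, here's a minimal sketch of the difference: assigning to a keyword fails before the code even runs, while shadowing a built-in silently succeeds (and is still a bad idea):

try:
    compile("True = 5", "<example>", "exec")
except SyntaxError as exc:
    print("Keyword assignment fails:", exc.msg)  # cannot assign to True

list = [1, 2, 3]  # no error, but the built-in list() is now shadowed
print(list)
del list          # remove the shadowing name to restore the built-in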

There are thirty-five keywords in Python. Here’s a list of them, each linked to its relevant section in this tutorial:

Two keywords have additional uses beyond their initial use cases. The else keyword is used not only in conditional statements but also with loops and with try and except. The as keyword is most commonly used in import statements, but it’s also used with the with keyword.

The list of Python keywords and soft keywords has changed over time. For example, the await and async keywords weren’t added until Python 3.7. Also, both print and exec were keywords in Python 2.7 but were turned into built-in functions in Python 3 and no longer appear in the keywords list.

Python Soft Keywords

As mentioned above, you’ll get an error if you try to assign something to a Python keyword. Soft keywords, on the other hand, aren’t that strict. They syntactically act as keywords only in certain conditions.

This new capability was made possible thanks to the introduction of the PEG parser in Python 3.9, which changed how the interpreter reads the source code.

Leveraging the PEG parser allowed for the introduction of structural pattern matching in Python. In order to use intuitive syntax, the authors picked match, case, and _ for the pattern matching statements. Notably, match and case are widely used for this purpose in many other programming languages.

To prevent conflicts with existing Python code that already used match, case, and _ as variable or function names, Python developers decided to introduce the concept of soft keywords.

Currently, there are four soft keywords in Python: match, case, type, and _.

You can use the links above to jump to the soft keywords you’d like to read about, or you can continue reading for a guided tour.
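
As a quick illustration of the “soft” part, match behaves as a keyword only at the start of a match statement and remains a perfectly valid identifier elsewhere. A minimal sketch (requires Python 3.10+):

match = "just a variable name"  # fine: match is only a soft keyword

def http_status(status):
    match status:               # here it introduces structural pattern matching
        case 200:
            return "OK"
        case 404:
            return "Not Found"
        case _:
            return "Something else"

print(http_status(404))  # Not Found
print(match)             # just a variable name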

Value Keywords: True, False, None

There are three Python keywords that are used as values. These values are singleton values that can be used over and over again and always reference the exact same object. You’ll most likely see and use these values a lot.
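
Because each of these values is a singleton, identity checks with is are the idiomatic way to test for them; a small sketch:

x = None
print(x is None)     # True: there is only one None object
flag = (1 == 1)
print(flag is True)  # True: comparisons return the singleton True
print(type(None))    # <class 'NoneType'>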

There are a few terms used in the sections below that may be new to you. They’re defined here, and you should be aware of their meaning before proceeding:

Read the full article at https://realpython.com/python-keywords/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

11:23

Lead Asahi Linux Developer Quits Days After Leaving Kernel Maintainer Role [Slashdot: Linux]

Hector Martin has resigned as the project lead of Asahi Linux, weeks after stepping down from his role as a Linux kernel maintainer for Apple ARM support. His departure from Asahi follows a contentious exchange with Linus Torvalds over development processes and social media advocacy. After quitting kernel maintenance earlier this month, the conflict escalated when Martin suggested that "shaming on social media" might be necessary to effect change. Torvalds sharply rejected this approach, stating that "social media brigading just makes me not want to have anything at all to do with your approach" and suggested that Martin himself might be the problem. In his final resignation announcement from Asahi, Martin wrote: "I no longer have any faith left in the kernel development process or community management approach." The dispute reflects deeper tensions in the Linux kernel community, particularly around the integration of Rust code. It follows the August departure of another key Rust for Linux maintainer, Wedson Almeida Filho from Microsoft. According to Sonatype's research, more than 300,000 open source projects have slowed or halted updates since 2020.

Read more of this story at Slashdot.

Is It Time For a Change In GNOME Leadership? [Slashdot: Linux]

Longtime Slashdot reader BrendaEM writes: Command-line aside, Cinnamon is the most effective keeper of the Linux desktop flame -- by not abandoning desktop and laptop computers. Yes, there are other desktop GUIs, such as MATE, and the lightweight Xfce, which are valuable options when low overhead is important, such as in LinuxCNC. However, among the general public lies a great expanse of office workers who need a full-featured Linux desktop. The programmers who work on GNOME and its family of supporting applications enrich many other desktops and do more than their share. These faithful developers deserve better user-interface leadership. GNOME has tried to steer itself into tablet waters, which is admirable, but GNOME 3.x diminished the desktop experience for both laptop and desktop users. For instance, the moment you design what should be a graphical user interface with words such as "Activities," you ask people to change horses midstream. That is not to say that the command line and GUI cannot coexist -- because they can, as they do in many CAD programs. I remember a time when GNOME ruled the Linux desktop -- and I can remember when GNOME left those users behind. Perhaps in the future, GNOME could return to the Linux desktop and join forces with Cinnamon -- so that we may once again have the year of the Linux desktop.

Read more of this story at Slashdot.

LibreOffice 24.8.1, the first minor release of the recently announced LibreOffice 24.8 family, is available for download [Press Releases Archives - The Document Foundation Blog]

The LibreOffice 24.8 family is optimised for the privacy-conscious office suite user who wants full control over the information they share

Berlin, 12 September 2024 – LibreOffice 24.8.1, the first minor release of the LibreOffice 24.8 family of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple and Intel) and Linux, is available at www.libreoffice.org/download. For users who don’t need the latest features and prefer a more tested version, TDF maintains the previous LibreOffice 24.2 family, with several months of back-ported fixes. The current version is LibreOffice 24.2.6.

LibreOffice is the only document-creation software that respects the privacy of users working with personal or confidential information – ensuring that the user is able to decide if and with whom to share the content they create. As such, LibreOffice is the best option for the privacy-conscious office suite user, and it offers a feature set comparable to the leading product on the market.

In addition, LibreOffice offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes by optimising the space available on the desktop to put the maximum number of features just a click or two away.

The biggest advantage over competing products is the LibreOffice Technology Engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those from ecosystem companies – are based. This allows LibreOffice to provide a better user experience and to produce identical and fully interoperable documents based on the two available ISO standards: the Open Document Format (ODT, ODS and ODP) and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a great deal of artificial complexity, which can cause problems for users who are confident that they are using a true open standard.

End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which can be downloaded from the following link: books.libreoffice.org. In addition, they will be able to get first-level technical support from volunteers on the user mailing lists and the Ask LibreOffice website: ask.libreoffice.org.

A short video highlighting the main new features is available on YouTube and PeerTube peertube.opencloud.lu/w/ibmZUeRgnx9bPXQeYUyXTV.


LibreOffice for Enterprise

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.

The Document Foundation has developed a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the provision of an LTS (long-term support) enterprise-optimised version of LibreOffice, plus migration consulting and training provided by certified professionals who offer value-added solutions that are consistent with proprietary offerings. Reference: www.libreoffice.org/get-help/professional-support/.

In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.

LibreOffice 24.8.1 availability

LibreOffice 24.8.1 is available from www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 (no longer supported by Microsoft) and Apple macOS 10.15. Products based on LibreOffice technology for Android and iOS are listed at www.libreoffice.org/download/android-and-ios/.

LibreOffice users, free software advocates and community members can support The Document Foundation by making a donation at www.libreoffice.org/donate.

Bugs fixed: RC1 and RC2

Ubuntu’s Icon Theme Fixing Its Not-So-Obvious ‘Bug’ [OMG! Ubuntu!]

Ever looked at Ubuntu’s default icon theme Yaru and found yourself thinking: “Eh, some of those icons look too big”? —No, can’t say I had either! But it turns out some of the icons are indeed oversized. The Yaru icon theme in Ubuntu uses 4 different shapes for its app, folder and mimetype (file) icons, with a shape picked based on what works best for the design motif being used. Those shapes are: Of those, the most common icon shape used in Yaru is ‘square’ (with rounded corners, but don’t call it a squircle cos that’s so 2014, y’all). It’s […]

You're reading Ubuntu’s Icon Theme Fixing Its Not-So-Obvious ‘Bug’, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Ubuntu 24.04.2 Delayed, Won’t Be Released This Week [OMG! Ubuntu!]

If you were expecting Ubuntu 24.04.2 LTS to drop tomorrow, I come bearing some bad news: the release has been delayed by a week. Canonical’s Utkarsh Gupta reports that an ‘unfortunate incident’ resulted in some of the newly spun Ubuntu 24.04.2 images (for flavours) being built without the new HWE kernel on board (which is Linux 6.11, for those unaware). Now, including a new kernel version on the ISO is kind of the whole point of the second Ubuntu point release. It has to be there so that the latest long-term support release can boot on and support the latest […]

You're reading Ubuntu 24.04.2 Delayed, Won’t Be Released This Week, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

GNOME’s Website Just Got a Major Redesign [OMG! Ubuntu!]

GNOME rolled out a huge revamp to its official website today, and I have to say: it’s a solid improvement over the old one. The official GNOME website has an important role, serving as both showcase and springboard for those looking to learn more about the desktop environment, the app ecosystem, developer documentation, or how to get involved and support the project. Arranging, presenting, and meeting all of those needs on a single landing page—and doing it in an engaging, encouraging way? Difficult to pull off—but GNOME has. The new design looks flashy and modern. It’s more spacious and vibrant, […]

You're reading GNOME’s Website Just Got a Major Redesign, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Clapper Media Player Adds New Features, Official Windows Build [OMG! Ubuntu!]

A new version of the slick Clapper media player is out with several neat improvements. Not newly new, I should say. I hadn’t run a flatpak update in Ubuntu in an age, so I only just noticed an update pending for this nifty little media player. But I figured I’d write about it since it’s been around 10 months since its last major release (save a bug fix release last summer). So what’s new? Well, Clapper 0.8.0 intros a new libpeas-based plugin system in its underlying Clapper library (which other apps can make use of to play back media, as Mastodon client […]

You're reading Clapper Media Player Adds New Features, Official Windows Build, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

KDE Plasma 6.3 Released, This is What’s New [OMG! Ubuntu!]

A new version of the KDE Plasma desktop environment is out and, as you’d expect, the update is packed with new features, UI tweaks, and performance boosts. KDE Plasma 6.3 is the fourth major update in the KDE Plasma 6.x series and it also marks the one-year anniversary of the KDE Plasma 6.0 debut – something KDE notes in its announcement: One year on, with the teething problems a major new release inevitably brings firmly behind us, Plasma’s developers have worked on fine-tuning, squashing bugs and adding features to Plasma 6 — turning it into the best desktop environment for […]

You're reading KDE Plasma 6.3 Released, This is What’s New, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Ghostty Terminal Now Supports Server-Side Decorations on Linux [OMG! Ubuntu!]

A new version of Ghostty emerged this week and in this post I run-through the key changes. For those unfamiliar with it, Ghostty is an open-source terminal emulator written in Zig. It offers a “fast, feature-rich, and native” experience — doesn’t claim to be faster, more featured, or go deeper than other native terminals, just offer a competitive combo of the three. Given it does pretty much everything other terminal emulators do, fans faithful to more established terminal emulators won’t find Ghostty‘s presence spooks ’em into switching. It’s a passion project there to be used (or not) depending on need, taste, […]

You're reading Ghostty Terminal Now Supports Server-Side Decorations on Linux, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Best Free and Open Source Alternatives to Apple AirDrop [Linux Today]

AirDrop is a proprietary wireless ad hoc service. The service transfers files among supported Macintosh computers and iOS devices by means of close-range wireless communication. AirDrop is not available for Linux. We recommend the best free and open source alternatives.

The post Best Free and Open Source Alternatives to Apple AirDrop appeared first on Linux Today.

Beelzebub: Open-source honeypot framework [Linux Today]

Beelzebub is an open-source honeypot framework engineered to create a secure environment for detecting and analyzing cyber threats. It features a low-code design for seamless deployment and leverages AI to emulate the behavior of a high-interaction honeypot.

The post Beelzebub: Open-source honeypot framework appeared first on Linux Today.

How to Install Tiny Tiny RSS Using Docker on PC (Ultimate Guide) [Linux Today]

This article will show you how to install Tiny Tiny RSS on Linux using Docker and then how to add a new RSS feed, add plugins, themes, and more.

The post How to Install Tiny Tiny RSS Using Docker on PC (Ultimate Guide) appeared first on Linux Today.

How to Install Speedtest Tracker to Monitor Your Internet Speed [Linux Today]

Learn how to install Speedtest Tracker with Docker and monitor your internet speed with real-time results.

The post How to Install Speedtest Tracker to Monitor Your Internet Speed appeared first on Linux Today.

Zellij: A Modern Terminal Multiplexer for Linux [Linux Today]

In the world of Linux, terminal multiplexers are essential tools for developers, system administrators, and power users, as they allow you to manage multiple terminal sessions within a single window, making your workflow more efficient and organized.

One of the newest and most exciting terminal multiplexers available today is Zellij, which is an open-source terminal multiplexer designed to simplify and enhance the way you work in the command line.

Unlike traditional multiplexers like tmux or screen, Zellij offers a unique layout system, keybindings that are easy to learn, and a plugin system that allows for customization.

You can find the official repository for Zellij on GitHub, which is actively maintained by a community of developers who are passionate about improving the terminal experience.

The post Zellij: A Modern Terminal Multiplexer for Linux appeared first on Linux Today.

Chezmoi: Manage Your Dotfiles Across Multiple Linux Systems [Linux Today]

Chezmoi is an incredible CLI tool that makes it easier to manage your system and software configuration dotfiles across multiple systems.

The post Chezmoi: Manage Your Dotfiles Across Multiple Linux Systems appeared first on Linux Today.

How to Change Java Version on Ubuntu (CLI and GUI) [Linux Today]

Discover a step-by-step guide to change the default version of Java using the CLI and GUI methods on the Ubuntu system.

The post How to Change Java Version on Ubuntu (CLI and GUI) appeared first on Linux Today.

Microsoft’s WSL May Soon Embrace Arch Linux [Linux Today]

Arch may soon become an officially offered distro on Microsoft’s Windows Subsystem for Linux, expanding its reach to Windows users.

The post Microsoft’s WSL May Soon Embrace Arch Linux appeared first on Linux Today.

15 Best Free and Open Source Console Email Clients [Linux Today]

To provide an insight into the quality of software that is available, we have compiled a list of 15 console email clients. Hopefully, there will be something of interest for anyone who wants to efficiently manage their mailbox from the terminal.

The post 15 Best Free and Open Source Console Email Clients appeared first on Linux Today.

You Can Now Install Ubuntu on WSL Using the New Tar-Based Format [Linux Today]

Starting from WSL version 2.4.8, we can install Ubuntu on WSL from a tar file, without using the Microsoft Store on Windows.

The post You Can Now Install Ubuntu on WSL Using the New Tar-Based Format appeared first on Linux Today.

Django Weblog: DjangoCongress JP 2025 Announcement and Live Streaming! [Planet Python]

DjangoCongress JP 2025, to be held on Saturday, February 22, 2025 at 10 am (Japan Standard Time), will be broadcast live!

It will be streamed on the following YouTube Live channels:

This year there will be talks not only about Django, but also about FastAPI and other asynchronous web topics. There will also be talks on Django core development, Django Software Foundation (DSF) governance, and other topics from around the world. Simultaneous translation will be provided in both English and Japanese.

Schedule

ROOM1
  • Gradually moving DRF toward an onion architecture
  • The Async Django ORM: Where Is it?
  • FastAPI in the field
  • Speed at Scale for Django Web Applications
  • Streamlining API development and replacing an existing API with Django Ninja
  • Implementing Agentic AI Solutions in Django from scratch
  • Diving into DSF governance: past, present and future
ROOM2
  • Can generative AI build a Django app? (Let’s try it with FastAPI too)
  • Partial use of Django in digital transformation (DX) projects
  • You can do it! Django testing (2025)
  • Design approaches for authenticating multiple user types in Django
  • Getting Knowledge from Django Hits: Using Grafana and Prometheus
  • Culture Eats Strategy for Breakfast: Why Psychological Safety Matters in Open Source
  • µDjango. The next step in the evolution of asynchronous microservices technology.

A public viewing of the event will also be held in Tokyo, along with a reception, so please check the following connpass page if you plan to attend.

Registration (connpass page): DjangoCongress JP 2025 Public Viewing

Eli Bendersky: Decorator JITs - Python as a DSL [Planet Python]

Spend enough time looking at Python programs and packages for machine learning, and you'll notice that the "JIT decorator" pattern is pretty popular. For example, this JAX snippet:

import jax.numpy as jnp
import jax

@jax.jit
def add(a, b):
  return jnp.add(a, b)

# Use "add" as a regular Python function
... = add(...)

Or the Triton language for writing GPU kernels directly in Python:

import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr,
               y_ptr,
               output_ptr,
               n_elements,
               BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    output = x + y
    tl.store(output_ptr + offsets, output, mask=mask)

In both cases, the function decorated with jit doesn't get executed by the Python interpreter in the normal sense. Instead, the code inside is more like a DSL (Domain Specific Language) processed by a special purpose compiler built into the library (JAX or Triton). Another way to think about it is that Python is used as a meta language to describe computations.

In this post I will describe some implementation strategies used by libraries to make this possible.

Preface - where we're going

The goal is to explain how different kinds of jit decorators work by using a simplified, educational example that implements several approaches from scratch. All the approaches featured in this post will be using this flow:

Flow: Python source --> Expr IR --> LLVM IR --> Execution

These are the steps that happen when a Python function wrapped with our educational jit decorator is called:

  1. The function is translated to an "expression IR" - Expr.
  2. This expression IR is converted to LLVM IR.
  3. Finally, the LLVM IR is JIT-executed.

Steps (2) and (3) use llvmlite; I've written about llvmlite before, see this post and also the pykaleidoscope project. For an introduction to JIT compilation, be sure to read this and maybe also the series of posts starting here.

First, let's look at the Expr IR. Here we'll make a big simplification - only supporting functions that define a single expression, e.g.:

def expr2(a, b, c, d):
    return (a + d) * (10 - c) + b + d / c

Naturally, this can be easily generalized - after all, LLVM IR can be used to express fully general computations.

Here are the Expr data structures:

from dataclasses import dataclass
from enum import Enum

class Expr:
    pass

@dataclass
class ConstantExpr(Expr):
    value: float

@dataclass
class VarExpr(Expr):
    name: str
    arg_idx: int

class Op(Enum):
    ADD = "+"
    SUB = "-"
    MUL = "*"
    DIV = "/"

@dataclass
class BinOpExpr(Expr):
    left: Expr
    right: Expr
    op: Op
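
For intuition, here's the Expr tree for a small function like def f(a, b): return a + 2 * b, built by hand from these classes (a sketch; this snippet is not part of the original post's code):

# IR for: a + 2 * b  (a is argument 0, b is argument 1)
expr = BinOpExpr(
    left=VarExpr("a", 0),
    right=BinOpExpr(ConstantExpr(2.0), VarExpr("b", 1), Op.MUL),
    op=Op.ADD,
)
# llvm_jit_evaluate(expr, 3.0, 4.0), defined below, should return 11.0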

To convert an Expr into LLVM IR and JIT-execute it, we'll use this function:

from ctypes import CFUNCTYPE, c_double

import llvmlite.binding as llvm

def llvm_jit_evaluate(expr: Expr, *args: float) -> float:
    """Use LLVM JIT to evaluate the given expression with *args.

    expr is an instance of Expr. *args are the arguments to the expression, each
    a float. The arguments must match the arguments the expression expects.

    Returns the result of evaluating the expression.
    """
    llvm.initialize()
    llvm.initialize_native_target()
    llvm.initialize_native_asmprinter()
    llvm.initialize_native_asmparser()

    cg = _LLVMCodeGenerator()
    modref = llvm.parse_assembly(str(cg.codegen(expr, len(args))))

    target = llvm.Target.from_default_triple()
    target_machine = target.create_target_machine()
    with llvm.create_mcjit_compiler(modref, target_machine) as ee:
        ee.finalize_object()
        cfptr = ee.get_function_address("func")
        cfunc = CFUNCTYPE(c_double, *([c_double] * len(args)))(cfptr)
        return cfunc(*args)

It uses the _LLVMCodeGenerator class to actually generate LLVM IR from Expr. This process is straightforward and covered extensively in the resources I linked to earlier; take a look at the full code here.

My goal with this architecture is to make things simple, but not too simple. On one hand - there are several simplifications: only single expressions are supported, very limited set of operators, etc. It's very easy to extend this! On the other hand, we could have just trivially evaluated the Expr without resorting to LLVM IR; I do want to show a more complete compilation pipeline, though, to demonstrate that an arbitrary amount of complexity can be hidden behind these simple interfaces.

With these building blocks in hand, we can review the strategies used by jit decorators to convert Python functions into Exprs.

AST-based JIT

Python comes with powerful code reflection and introspection capabilities out of the box. Here's the astjit decorator:

import ast
import functools
import inspect

def astjit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs:
            raise ASTJITError("Keyword arguments are not supported")
        source = inspect.getsource(func)
        tree = ast.parse(source)

        emitter = _ExprCodeEmitter()
        emitter.visit(tree)
        return llvm_jit_evaluate(emitter.return_expr, *args)

    return wrapper

This is a standard Python decorator. It takes a function and returns another function that will be used in its place (functools.wraps ensures that function attributes like the name and docstring of the wrapper match the wrapped function).

Here's how it's used:

from astjit import astjit

@astjit
def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

print(some_expr(2, 16, 3))

After astjit is applied to some_expr, what some_expr holds is the wrapper. When some_expr(2, 16, 3) is called, the wrapper is invoked with *args = [2, 16, 3].
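
As a quick aside, you can reproduce the introspection step the wrapper relies on by hand; this sketch (using an undecorated copy of some_expr) parses the source and dumps the AST that the emitter will walk:

import ast
import inspect

def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

tree = ast.parse(inspect.getsource(some_expr))
print(ast.dump(tree.body[0], indent=2))  # a FunctionDef whose body is a single Return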

The wrapper obtains the AST of the wrapped function, and then uses _ExprCodeEmitter to convert this AST into an Expr:

class _ExprCodeEmitter(ast.NodeVisitor):
    def __init__(self):
        self.args = []
        self.return_expr = None
        self.op_map = {
            ast.Add: Op.ADD,
            ast.Sub: Op.SUB,
            ast.Mult: Op.MUL,
            ast.Div: Op.DIV,
        }

    def visit_FunctionDef(self, node):
        self.args = [arg.arg for arg in node.args.args]
        if len(node.body) != 1 or not isinstance(node.body[0], ast.Return):
            raise ASTJITError("Function must consist of a single return statement")
        self.visit(node.body[0])

    def visit_Return(self, node):
        self.return_expr = self.visit(node.value)

    def visit_Name(self, node):
        try:
            idx = self.args.index(node.id)
        except ValueError:
            raise ASTJITError(f"Unknown variable {node.id}")
        return VarExpr(node.id, idx)

    def visit_Constant(self, node):
        return ConstantExpr(node.value)

    def visit_BinOp(self, node):
        left = self.visit(node.left)
        right = self.visit(node.right)
        try:
            op = self.op_map[type(node.op)]
            return BinOpExpr(left, right, op)
        except KeyError:
            raise ASTJITError(f"Unsupported operator {node.op}")

When _ExprCodeEmitter finishes visiting the AST it's given, its return_expr field will contain the Expr representing the function's return value. The wrapper then invokes llvm_jit_evaluate with this Expr.

Note how our decorator interjects into the regular Python execution process. When some_expr is called, instead of the standard Python compilation and execution process (code is compiled into bytecode, which is then executed by the VM), we translate its code to our own representation and emit LLVM from it, and then JIT execute the LLVM IR. While it seems kinda pointless in this artificial example, in reality this means we can execute the function's code in any way we like.

AST JIT case study: Triton

This approach is almost exactly how the Triton language works. The body of a function decorated with @triton.jit gets parsed to a Python AST, which then - through a series of internal IRs - ends up in LLVM IR; this in turn is lowered to PTX by the NVPTX LLVM backend. Then, the code runs on a GPU using a standard CUDA pipeline.

Naturally, the subset of Python that can be compiled down to a GPU is limited; but it's sufficient to run performant kernels, in a language that's much friendlier than CUDA and - more importantly - lives in the same file with the "host" part written in regular Python. For example, if you want testing and debugging, you can run Triton in "interpreter mode" which will just run the same kernels locally on a CPU.

Note that Triton lets us import names from the triton.language package and use them inside kernels; these serve as the intrinsics for the language - special calls the compiler handles directly.

Bytecode-based JIT

Python is a fairly complicated language with a lot of features. Therefore, if our JIT has to support some large portion of Python semantics, it may make sense to leverage more of Python's own compiler. Concretely, we can have it compile the wrapped function all the way to bytecode, and start our translation from there.

Here's the bytecodejit decorator that does just this [1]:

import dis
import functools

def bytecodejit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs:
            raise BytecodeJITError("Keyword arguments are not supported")

        expr = _emit_exprcode(func)
        return llvm_jit_evaluate(expr, *args)

    return wrapper


def _emit_exprcode(func):
    bc = func.__code__
    stack = []
    for inst in dis.get_instructions(func):
        match inst.opname:
            case "LOAD_FAST":
                idx = inst.arg
                stack.append(VarExpr(bc.co_varnames[idx], idx))
            case "LOAD_CONST":
                stack.append(ConstantExpr(inst.argval))
            case "BINARY_OP":
                right = stack.pop()
                left = stack.pop()
                match inst.argrepr:
                    case "+":
                        stack.append(BinOpExpr(left, right, Op.ADD))
                    case "-":
                        stack.append(BinOpExpr(left, right, Op.SUB))
                    case "*":
                        stack.append(BinOpExpr(left, right, Op.MUL))
                    case "/":
                        stack.append(BinOpExpr(left, right, Op.DIV))
                    case _:
                        raise BytecodeJITError(f"Unsupported operator {inst.argval}")
            case "RETURN_VALUE":
                if len(stack) != 1:
                    raise BytecodeJITError("Invalid stack state")
                return stack.pop()
            case "RESUME" | "CACHE":
                # Skip nops
                pass
            case _:
                raise BytecodeJITError(f"Unsupported opcode {inst.opname}")

The Python VM is a stack machine; so we emulate a stack to convert the function's bytecode to Expr IR (a bit like an RPN evaluator). As before, we then use our llvm_jit_evaluate utility function to lower Expr to LLVM IR and JIT execute it.
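
If you want to look at the instruction stream the translator walks, the standard dis module prints it; a quick sketch (the exact opcodes vary between CPython versions):

import dis

def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

dis.dis(some_expr)  # shows LOAD_FAST / LOAD_CONST / BINARY_OP / RETURN_VALUE on CPython 3.11+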

Using this JIT is as simple as the previous one - just swap astjit for bytecodejit:

from bytecodejit import bytecodejit

@bytecodejit
def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

print(some_expr(2, 16, 3))

Bytecode JIT case study: Numba

Numba is a compiler for Python itself. The idea is that you can speed up specific functions in your code by slapping a numba.njit decorator on them. What happens next is similar in spirit to our simple bytecodejit, but of course much more complicated because it supports a very large portion of Python semantics.

Numba uses the Python compiler to emit bytecode, just as we did; it then converts it into its own IR, and then to LLVM using llvmlite [2].

By starting with the bytecode, Numba makes its life easier (no need to rewrite the entire Python compiler). On the other hand, it also makes some analyses harder, because by the time we're in bytecode, a lot of semantic information existing in higher-level representations is lost. For example, Numba has to sweat a bit to recover control flow information from the bytecode (by running it through a special interpreter first).

Tracing-based JIT

The two approaches we've seen so far are similar in many ways - both rely on Python's introspection capabilities to compile the source code of the JIT-ed function to some extent (one to AST, the other all the way to bytecode), and then work on this lowered representation.

The tracing strategy is very different. It doesn't analyze the source code of the wrapped function at all - instead, it traces its execution by means of specially-boxed arguments, leveraging overloaded operators and functions, and then works on the generated trace.

The code implementing this for our simple demo is surprisingly compact:

import functools
import inspect

def tracejit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs:
            raise TraceJITError("Keyword arguments are not supported")

        argspec = inspect.getfullargspec(func)

        argboxes = []
        for i, arg in enumerate(args):
            if i >= len(argspec.args):
                raise TraceJITError("Too many arguments")
            argboxes.append(_Box(VarExpr(argspec.args[i], i)))

        out_box = func(*argboxes)
        return llvm_jit_evaluate(out_box.expr, *args)

    return wrapper

Each runtime argument of the wrapped function is assigned a VarExpr, and that is placed in a _Box, a placeholder class which lets us do operator overloading:

@dataclass
class _Box:
    expr: Expr

_Box.__add__ = _Box.__radd__ = _register_binary_op(Op.ADD)
_Box.__sub__ = _register_binary_op(Op.SUB)
_Box.__rsub__ = _register_binary_op(Op.SUB, reverse=True)
_Box.__mul__ = _Box.__rmul__ = _register_binary_op(Op.MUL)
_Box.__truediv__ = _register_binary_op(Op.DIV)
_Box.__rtruediv__ = _register_binary_op(Op.DIV, reverse=True)

The remaining key function is _register_binary_op:

def _register_binary_op(opcode, reverse=False):
    """Registers a binary opcode for Boxes.

    If reverse is True, the operation is registered as arg2 <op> arg1,
    instead of arg1 <op> arg2.
    """

    def _op(arg1, arg2):
        if reverse:
            arg1, arg2 = arg2, arg1
        box1 = arg1 if isinstance(arg1, _Box) else _Box(ConstantExpr(arg1))
        box2 = arg2 if isinstance(arg2, _Box) else _Box(ConstantExpr(arg2))
        return _Box(BinOpExpr(box1.expr, box2.expr, opcode))

    return _op

To understand how this works, consider this trivial example:

@tracejit
def add(a, b):
    return a + b

print(add(1, 2))

After the decorated function is defined, add holds the wrapper function defined inside tracejit. When add(1, 2) is called, the wrapper runs:

  1. For each argument of add itself (that is a and b), it creates a new _Box holding a VarExpr. This denotes a named variable in the Expr IR.
  2. It then calls the wrapped function, passing it the boxes as runtime parameters.
  3. When (the wrapped) add runs, it invokes a + b. This is caught by the overloaded __add__ operator of _Box, and it creates a new BinOpExpr with the VarExprs representing a and b as children. This BinOpExpr is then returned [3].
  4. The wrapper unboxes the returned Expr and passes it to llvm_jit_evaluate to emit LLVM IR from it and JIT execute it with the actual runtime arguments of the call: 1, 2.

This might be a little mind-bending at first, because there are two different executions that happen:

  • The first is calling the wrapped add function itself, letting the Python interpreter run it as usual, but with special arguments that build up the IR instead of doing any computations. This is the tracing step.
  • The second is lowering the IR that our tracing step built into LLVM IR and then JIT-executing it with the actual runtime argument values 1, 2; this is the execution step.
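
For intuition, the tracing machinery can also be exercised by hand: boxing a couple of VarExprs and combining them with plain Python operators records the IR without doing any arithmetic. A small sketch using the classes above:

a = _Box(VarExpr("a", 0))
b = _Box(VarExpr("b", 1))
traced = a + 2 * b   # __add__/__rmul__ fire and build BinOpExprs; no numbers are computed
print(traced.expr)   # BinOpExpr(left=VarExpr(name='a', arg_idx=0), right=BinOpExpr(...), op=<Op.ADD: '+'>)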

This tracing approach has some interesting characteristics. Since we don't have to analyze the source of the wrapped functions but only trace through the execution, we can "magically" support a much richer set of programs, e.g.:

@tracejit
def use_locals(a, b, c):
    x = a + 2
    y = b - a
    z = c * x
    return y / x - z

print(use_locals(2, 8, 11))

This just works with our basic tracejit. Since Python variables are placeholders (references) for values, our tracing step is oblivious to them - it follows the flow of values. Another example:

@tracejit
def use_loop(a, b, c):
    result = 0
    for i in range(1, 11):
        result += i
    return result + b * c

print(use_loop(10, 2, 3))

This also just works! The created Expr will be a long chain of BinOpExpr additions of i's runtime values through the loop, added to the BinOpExpr for b * c.

This last example also leads us to a limitation of the tracing approach; the loop cannot be data-dependent - it cannot depend on the function's arguments, because the tracing step has no concept of runtime values and wouldn't know how many iterations to run through; or at least, it doesn't know this unless we want to perform the tracing run for every runtime execution [4].

The tracing approach is useful in several domains, most notably automatic differentiation (AD). For a slightly deeper taste, check out my radgrad project.

Tracing JIT case study: JAX

The JAX ML framework uses a tracing approach very similar to the one described here. The first code sample in this post shows the JAX notation. JAX cleverly wraps Numpy with its own version which is traced (similar to our _Box, but JAX calls these boxes "tracers"), letting you write regular-feeling Numpy code that can be JIT optimized and executed on accelerators like GPUs and TPUs via XLA. JAX's tracer builds up an underlying IR (called jaxpr) which can then be emitted to XLA ops and passed to XLA for further lowering and execution.

For a fairly deep overview of how JAX works, I recommend reading the autodidax doc.

As mentioned earlier, JAX has some limitations with things like data-dependent control flow in native Python. This won't work, because there's control flow that depends on a runtime value (count):

import jax

@jax.jit
def sum_datadep(a, b, count):
    total = a
    for i in range(count):
        total += b
    return total

print(sum_datadep(10, 3, 3))

When sum_datadep is executed, JAX will throw an exception, saying something like:

This concrete value was not available in Python because it depends on the value of the argument count.

As a remedy, JAX has its own built-in intrinsics from the jax.lax package. Here's the example rewritten in a way that actually works:

import jax
from jax import lax

@jax.jit
def sum_datadep_fori(a, b, count):
    def body(i, total):
        return total + b

    return lax.fori_loop(0, count, body, a)

fori_loop (and many other built-ins in the lax package) is something JAX can trace through, generating a corresponding XLA operation (XLA has support for While loops, to which this lax.fori_loop can be lowered).

The tracing approach has clear benefits for JAX as well; because it only cares about the flow of values, it can handle arbitrarily complicated Python code, as long as the flow of values can be traced. Just like the local variables and data-independent loops shown earlier, but also things like closures. This makes meta-programming and templating easy [5].

Code

The full code for this post is available on GitHub.


[1]Once again, this is a very simplified example. A more realistic translator would have to support many, many more Python bytecode instructions.
[2]In fact, llvmlite itself is a Numba sub-project and is maintained by the Numba team, for which I'm grateful!
[3]For a fun exercise, try adding constant folding to the wrapped _op: when both its arguments are constants (not boxes), instead of placing each in a _Box(ConstantExpr(...)), it could perform the mathematical operation on them and return a single constant box. This is a common optimization in compilers!
[4]

In all the JIT approaches shown in this post, the expectation is that compilation happens once, but the compiled function can be executed many times (perhaps in a loop). This means that the compilation step cannot depend on the runtime values of the function's arguments, because it has no access to them. You could say that it does, but that's just for the very first time the function is run (in the tracing approach); it has no way of knowing their values the next times the function will run.

JAX has some provisions for cases where a function is invoked with a small set of runtime values and we want to separately JIT each of them.

[5]A reader pointed out that TensorFlow's AutoGraph feature combines the AST and tracing approaches. TF's eager mode performs tracing, but it also uses AST analyses to rewrite Python loops and conditions into builtins like tf.cond and tf.while_loop.

Hugo van Kemenade: Improving licence metadata [Planet Python]

What? #

PEP 639 defines a spec on how to document licences used in Python projects.

Instead of using a Trove classifier such as “License :: OSI Approved :: BSD License”, which is imprecise (for example, which BSD licence?), the SPDX licence expression syntax is used.

How? #

pyproject.toml #

Change pyproject.toml as follows.

I usually use Hatchling as a build backend, and support was added in 1.27:

 [build-system]
 build-backend = "hatchling.build"
 requires = [
 "hatch-vcs",
- "hatchling",
+ "hatchling>=1.27",
 ]

Replace the freeform license field with a valid SPDX license expression, and add license-files which points to the licence files in the repo. There’s often only one, but if you have more than one, list them all:

 [project]
 ...
-license = { text = "MIT" }
+license = "MIT"
+license-files = [ "LICENSE" ]

Optionally delete the deprecated licence classifier:

 classifiers = [
 "Development Status :: 5 - Production/Stable",
 "Intended Audience :: Developers",
- "License :: OSI Approved :: MIT License",
 "Operating System :: OS Independent",

For example, see humanize#236 and prettytable#350.

Upload #

Then make sure to use a PyPI uploader that supports this.

I recommend using Trusted Publishing which I use with pypa/gh-action-pypi-publish to deploy from GitHub Actions. I didn’t need to make any changes here, just make a release as usual.

Result #

PyPI #

PyPI shows the new metadata:

Screenshot of PyPI showing licence expression: BSD-3-Clause

pip #

pip can also show you the metadata:

❯ pip install prettytable==3.13.0
❯ pip show prettytable
Name: prettytable
Version: 3.13.0
...
License-Expression: BSD-3-Clause
Location: /Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages
Requires: wcwidth
Required-by: norwegianblue, pypistats
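
You can also read the field programmatically; a minimal sketch using importlib.metadata (assuming the installed package was built with a backend that writes the new License-Expression field):

from importlib.metadata import metadata

meta = metadata("prettytable")
print(meta["License-Expression"])  # BSD-3-Clause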

Thank you! #

A lot of work went into this. Thank you to PEP authors Philippe Ombredanne for creating the first draft in 2019, to C.A.M. Gerlach for the second draft in 2021, and especially to Karolina Surma for getting the third draft over the finish line and helping with the implementation.

And many projects were updated to support this, thanks to the maintainers and contributors of at least:


Header photo: Amelia Earhart’s 1932 pilot licence in the San Diego Air and Space Museum Archive, with no known copyright restrictions.

Real Python: The Real Python Podcast – Episode #239: Behavior-Driven vs Test-Driven Development & Using Regex in Python [Planet Python]

What is behavior-driven development, and how does it work alongside test-driven development? How do you communicate requirements between teams in an organization? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Daniel Roy Greenfeld: Building a playing card deck [Planet Python]

Today is Valentine's Day. That makes it the perfect day to write a blog post showing how to not just build a deck of cards, but also show off cards from the hearts suit.

Bojan Mihelac: Prefixed Parameters for Django querystring tag [Planet Python]

An overview of Django 5.1's new querystring tag and how to add support for prefixed parameters.

Peter Bengtsson: get in JavaScript is the same as property in Python [Planet Python]

Prefix a function, in an object or class, with `get` and then that acts as a function call without brackets. Just like Python's `property` decorator.

EuroPython: EuroPython February 2025 Newsletter [Planet Python]

Hey ya 👋 

Hope you’re all having a fantastic February. We sure have been busy and got some exciting updates for you as we gear up for EuroPython 2025, which is taking place once again in the beautiful city of Prague. So let’s dive right in!

🗃️ Community Voting on Talks & Workshops

EuroPython 2025 is right around the corner and our programme team is hard at work putting together an amazing lineup. But we need your help to shape the conference! We received over 572 fantastic proposals, and now it’s time for Community Voting! 🎉 If you’ve attended EuroPython before or submitted a proposal this year, you’re eligible to vote.

📢 More votes = a stronger, more diverse programme! Spread the word and get your EuroPython friends to cast their votes too.

🏃The deadline is Monday next week, so don’t miss your chance!

🗳️ Vote now: https://ep2025.europython.eu/programme/voting

🧐Call for Reviewers

Want to play a key role in building an incredible conference? Join our review team and help select the best talks for EuroPython 2025! Whether you’re a Python expert or an enthusiastic community member, your insights matter.

We’d like to also thank the over 100 people who have already signed up to review! For those who haven’t done so yet, please remember to accept your Pretalx link and get your reviews in by Monday 17th February.

You can already start reviewing proposals, and each review takes as little as 5 minutes. We encourage reviewers to go through at least 20-30 proposals, but if you can do more, even better! With almost 600 submissions to pick from, your help ensures we curate a diverse and engaging programme.

If you’re passionate about Python and want to contribute, we’d love to have you. Sign up here: forms.gle/4GTJjwZ1nHBGetM18.

🏃The deadline is Monday next week, so don’t delay!

Got questions? Reach out to us at programme@europython.eu

📣 Community Outreach

EuroPython isn’t just present at other Python events—we actively support them too! As a community sponsor, we love helping local PyCons grow and thrive. We love giving back to the community and strengthening Python events across Europe! 🐍💙

PyCon + Web in Berlin
The EuroPython team had a fantastic time at PyCon + Web in Berlin, meeting fellow Pythonistas, exchanging ideas, and spreading the word about EuroPython 2025. It was great to connect with speakers, organizers, and attendees. 

Ever wondered how long it takes to walk from Berlin to Prague? A huge thank you to our co-organizers, Cheuk, Artur, and Cristián, for answering that in their fantastic lightning talk about EuroPython!


FOSDEM 2025
We had some members of the EuroPython team at FOSDEM 2025, connecting with the open-source community and spreading the Python love! 🎉 We enjoyed meeting fellow enthusiasts, sharing insights about the EuroPython Society, and giving away the first EuroPython 2025 stickers. If you stopped by—thank you and we hope to see you in Prague this July.


🦒 Speaker Mentorship Programme

The signups for The Speaker Mentorship Programme closed on 22nd January 2025. We’re excited to have matched 43 mentees with 24 mentors from our community. We had an increase in the number of mentees who signed up and that’s amazing! We’re glad to be contributing to the journey of new speakers in the Python community. A massive thank you to our mentors for supporting the mentees and to our mentees; we’re proud of you for taking this step in your journey as a speaker. 

26 mentees submitted at least 1 proposal. Out of this number, 13 mentees submitted 1 proposal, 9 mentees submitted 2 proposals, 2 mentees submitted 3 proposals, 1 mentee submitted 4 proposals and lastly, 1 mentee submitted 5 proposals. We wish our mentees the best of luck. We look forward to the acceptance of their proposals.

In a few weeks, we will host an online panel session with 2–3 experienced community members who will share their advice with first-time speakers. At the end of the panel, there will be a Q&A session to answer all the participants’ questions.

You can watch the recording of the previous year’s workshop here:

💰Sponsorship

EuroPython is one of the largest Python conferences in Europe, and it wouldn’t be possible without our sponsors. We are so grateful for the companies who have already expressed interest. If you’re interested in sponsoring EuroPython 2025 as well, please reach out to us at sponsoring@europython.eu.

🎤 EuroPython Speakers Share Their Experiences

We asked our past speakers to share their experiences speaking at EuroPython. These videos have been published on YouTube as shorts, and we’ve compiled them into brief clips for you to watch.

A big thanks goes to Sebastian Witowski, Jan Smitka, Yuliia Barabash, Jodie Burchell, Max Kahan, and Cheuk Ting Ho for sharing their experiences.

Why You Should Submit a Proposal for EuroPython? Part 2

Why You Should Submit a Proposal for EuroPython? Part 3

📊 EuroPython Society Board Report 

The EuroPython conference wouldn’t be what it is without the incredible volunteers who make it all happen. 💞 Behind the scenes, there’s also the EuroPython Society—a volunteer-led non-profit that manages the fiscal and legal aspects of running the conference, oversees its organization, and works on a few smaller projects like the grants programme. To keep everyone in the loop and promote transparency, the Board is sharing regular updates on what we’re working on.

The January board report is ready: https://europython-society.org/board-report-for-january-2025/

🐍 Upcoming Events in the Python Community

That’s all for now! Keep an eye on your inbox and our website for more news and announcements. We’re counting down the days until we can come together in Prague to celebrate our shared love for Python. 🐍❤️

Cheers,
The EuroPython Team

Kay Hayen: Nuitka Release 2.6 [Planet Python]

This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, “download now”.

This release has all-around improvements, with a lot of effort spent on bug fixes in the memory leak domain, and preparatory actions for scalability improvements.

Bug Fixes

  • MSYS2: Path normalization to native Windows format was required in more places for the MinGW variant of MSYS2.

    The os.path.normpath function doesn’t normalize to native Win32 paths with MSYS2, instead using forward slashes. This required manual normalization in additional areas. (Fixed in 2.5.1)

  • UI: Fix, give a proper error message when extension modules that were asked to be included could not be located. (Fixed in 2.5.1)

  • Fix, files with illegal module names (containing .) in their basename were incorrectly considered as potential sub-modules for --include-package. These are now skipped. (Fixed in 2.5.1)

  • Stubgen: Improved stability by preventing crashes when stubgen encounters code it cannot handle. Exceptions from it are now ignored. (Fixed in 2.5.1)

  • Stubgen: Addressed a crash that occurred when encountering assignments to non-variables. (Fixed in 2.5.1)

  • Python 3: Fixed a regression introduced in 2.5 release that could lead to segmentation faults in exception handling for generators. (Fixed in 2.5.2)

  • Python 3.11+: Corrected an issue where dictionary copies of large split directories could become corrupted. This primarily affected instance dictionaries, which are created as copies until updated, potentially causing problems when adding new keys. (Fixed in 2.5.2)

  • Python 3.11+: Removed the assumption that module dictionaries always contain only strings as keys. Some modules, like Foundation on macOS, use non-string keys. (Fixed in 2.5.2)

  • Deployment: Ensured that the --deployment option correctly affects the C compilation process. Previously, only individual disables were applied. (Fixed in 2.5.2)

  • Compatibility: Fixed a crash that could occur during compilation when unary operations were used within binary operations. (Fixed in 2.5.3)

  • Onefile: Corrected the handling of __compiled__.original_argv0, which could lead to crashes. (Fixed in 2.5.4)

  • Compatibility: Resolved a segmentation fault occurring at runtime when calling tensorflow.function with only keyword arguments. (Fixed in 2.5.5)

  • macOS: Harmless warnings generated for x64 DLLs on arm64 with newer macOS versions are now ignored. (Fixed in 2.5.5)

  • Python 3.13: Addressed a crash in Nuitka’s dictionary code that occurred when copying dictionaries due to internal changes in Python 3.13. (Fixed in 2.5.6)

  • macOS: Improved onefile mode signing by applying --macos-signed-app-name to the signature of binaries, not just app bundles. (Fixed in 2.5.6)

  • Standalone: Corrected an issue where too many paths were added as extra directories from the Nuitka package configuration. This primarily affected the win32com package, which currently relies on the package-dirs import hack. (Fixed in 2.5.6)

  • Python 2: Prevented crashes on macOS when creating onefile bundles with Python 2 by handling negative CRC32 values. This issue may have affected other versions as well. (Fixed in 2.5.6)

  • Plugins: Restored the functionality of code provided in pre-import-code, which was no longer being applied due to a regression. (Fixed in 2.5.6)

  • macOS: Suppressed the app bundle mode recommendation when it is already in use. (Fixed in 2.5.6)

  • macOS: Corrected path normalization when the output directory argument includes “~”.

  • macOS: GitHub Actions Python is now correctly identified as a Homebrew Python to ensure proper DLL resolution. (Fixed in 2.5.7)

  • Compatibility: Fixed a reference leak that could occur with values sent to generator objects. Asyncgen and coroutines were not affected. (Fixed in 2.5.7)

  • Standalone: The --include-package scan now correctly handles cases where both a package init file and competing Python files exist, preventing compile-time conflicts. (Fixed in 2.5.7)

  • Modules: Resolved an issue where handling string constants in modules created for Python 3.12 could trigger assertions, and modules created with 3.12.7 or newer failed to load on older Python 3.12 versions when compiled with Nuitka 2.5.5-2.5.6. (Fixed in 2.5.7)

  • Python 3.10+: Corrected the tuple code used when calling certain method descriptors. This issue primarily affected a Python 2 assertion, which was not impacted in practice. (Fixed in 2.5.7)

  • Python 3.13: Updated resource readers to accept multiple arguments for importlib.resources.read_text, and correctly handle encoding and errors as keyword-only arguments.

  • Scons: The platform encoding is no longer used to decode ccache logs. Instead, latin1 is used, as it is sufficient for matching filenames across log lines and avoids potential encoding errors. (Fixed in 2.5.7)

  • Python 3.12+: Requests to statically link libraries for hacl are now ignored, as these libraries do not exist. (Fixed in 2.5.7)

  • Compatibility: Fixed a memory leak affecting the results of functions called via specs. This primarily impacted overloaded hard import operations. (Fixed in 2.5.7)

  • Standalone: When multiple distributions for a package are found, the one with the most accurate file matching is now selected. This improves handling of cases where an older version of a package (e.g., python-opencv) is overwritten with a different variant (e.g., python-opencv-headless), ensuring the correct version is used for Nuitka package configuration and reporting. (Fixed in 2.5.8)

  • Python 2: Prevented a potential crash during onefile initialization on Python 2 by passing the directory name directly from the onefile bootstrap, avoiding the use of os.dirname which may not be fully loaded at that point. (Fixed in 2.5.8)

  • Anaconda: Preserved necessary PATH environment variables on Windows for packages that require loading DLLs from those locations. Only PATH entries not pointing inside the installation prefix are removed. (Fixed in 2.5.8)

  • Anaconda: Corrected the is_conda_package check to function properly when distribution names and package names differ. (Fixed in 2.5.8)

  • Anaconda: Improved package name resolution for Anaconda distributions by checking conda metadata when file metadata is unavailable through the usual methods. (Fixed in 2.5.8)

  • MSYS2: Normalized the downloaded gcc path to use native Windows slashes, preventing potential compilation failures. (Fixed in 2.5.9)

  • Python 3.13: Restored static libpython functionality on Linux by adapting to a signature change in an unexposed API. (Fixed in 2.5.9)

  • Python 3.6+: Prevented asyncgen from being resurrected when a finalizer is attached, resolving memory leaks that could occur with asyncio in the presence of exceptions. (Fixed in 2.5.10)

  • UI: Suppressed the gcc download prompt that could appear during --version output on Windows systems without MSVC or with an improperly installed gcc.

  • Ensured compatibility with monkey patched os.lstat or os.stat functions, which are used in some testing scenarios.

  • Data Composer: Improved the determinism of the JSON statistics output by sorting keys, enabling reliable build comparisons.

  • Python 3.6+: Fixed a memory leak in asyncgen with finalizers, which could lead to significant memory consumption when using asyncio and encountering exceptions.

  • Scons: Optimized empty generators (an optimization result) to avoid generating unused context code, eliminating C compilation warnings.

  • Python 3.6+: Fixed a reference leak affecting the asend value in asyncgen. While typically None, this could lead to observable reference leaks in certain cases.

  • Python 3.5+: Improved handling of coroutine and asyncgen resurrection, preventing memory leaks with asyncio and asyncgen, and ensuring correct execution of finally code in coroutines.

  • Python 3: Corrected the handling of generator objects resurrecting during deallocation. While not explicitly demonstrated, this addresses potential issues similar to those encountered with coroutines, particularly for old-style coroutines created with the types.coroutine decorator.

  • PGO: Fixed a potential crash during runtime trace collection by ensuring timely initialization of the output mechanism.

Package Support

  • Standalone: Added inclusion of metadata for jupyter_client to support its own usage of metadata. (Added in 2.5.1)

  • Standalone: Added support for the llama_cpp package. (Added in 2.5.1)

  • Standalone: Added support for the litellm package. (Added in 2.5.2)

  • Standalone: Added support for the lab_lamma package. (Added in 2.5.2)

  • Standalone: Added support for docling metadata. (Added in 2.5.5)

  • Standalone: Added support for pypdfium on Linux. (Added in 2.5.5)

  • Standalone: Added support for using the debian package. (Added in 2.5.5)

  • Standalone: Added support for the pdfminer package. (Added in 2.5.5)

  • Standalone: Included missing dependencies for the torch._dynamo.polyfills package. (Added in 2.5.6)

  • Standalone: Added support for rtree on Linux. The previous static configuration only worked on Windows and macOS; this update detects it from the module code. (Added in 2.5.6)

  • Standalone: Added missing pywebview JavaScript data files. (Added in 2.5.7)

  • Standalone: Added support for newer versions of the sklearn package. (Added in 2.5.7)

  • Standalone: Added support for newer versions of the dask package. (Added in 2.5.7)

  • Standalone: Added support for newer versions of the transformers package. (Added in 2.5.7)

  • Windows: Placed numpy DLLs at the top level for improved support in the Nuitka VM. (Added in 2.5.7)

  • Standalone: Allowed excluding browsers when including playwright. (Added in 2.5.7)

  • Standalone: Added support for newer versions of the sqlfluff package. (Added in 2.5.8)

  • Standalone: Added support for the opencv conda package, disabling unnecessary workarounds for its dependencies. (Added in 2.5.8)

  • Standalone: Added support for newer versions of the soundfile package.

  • Standalone: Added support for newer versions of the coincurve package.

  • Standalone: Added support for newer versions of the apscheduler package.

  • macOS: Removed the error and workaround forcing that required bundle mode for PyQt5 on macOS, as standalone mode now appears to function correctly.

  • Standalone: Added support for seleniumbase package downloads.

New Features

  • Module: Implemented 2-phase loading for all modules in Python 3.5 and higher. This improves loading modules as sub-packages in Python 3.12+, where the loading context is no longer accessible.

  • UI: Introduced the app value for the --mode parameter. This creates an app bundle on macOS and a onefile binary on other platforms, replacing the --macos-create-app-bundle option. (Added in 2.5.5)

  • UI: Added a package mode, similar to module, which automatically includes all sub-modules of a package without requiring manual specification with --include-package.

  • Module: Added an option to completely disable the use of stubgen. (Added in 2.5.1)

  • Homebrew: Added support for tcl9 with the tk-inter plugin.

  • Package Resolution: Improved handling of multiple distributions installed for the same package name. Nuitka now attempts to identify the most recently installed distribution, enabling proper recognition of different versions in scenarios like python-opencv and python-opencv-headless.

  • Python 3.13.1 Compatibility: Addressed an issue where a workaround introduced for Python 3.10.0 broke standalone mode in Python 3.13.1. (Added in 2.5.6)

  • Plugins: Introduced a new feature for absolute source paths (typically derived from variables or relative to constants). This offers greater flexibility compared to the by_code DLL feature, which may be removed in the future. (Added in 2.5.6)

  • Plugins: Added support for when conditions in variable sections within Nuitka Package configuration.

  • macOS: App bundles now automatically switch to the containing directory when not launched from the command line. This prevents the current directory from defaulting to /, which is rarely correct and can be unexpected for users. (Added in 2.5.6)

  • Compatibility: Relaxed the restriction on setting the compiled frame f_trace. Instead of outright rejection, the deployment flag --no-deployment-flag=frame-useless-set-trace can be used to allow it, although it will be ignored.

  • Windows: Added the ability to detect extension module entry points using an inline copy of pefile. This enables --list-package-dlls to verify extension module validity on the platform. It also opens possibilities for automatic extension module detection on major operating systems.

  • Watch: Added support for using conda packages instead of PyPI packages.

  • UI: Introduced --list-package-exe to complement --list-package-dlls for package analysis when creating Nuitka Package Configuration.

  • Windows ARM: Removed workarounds that are no longer necessary for compilation. While the lack of dependency analysis might require correction in a hotfix, this configuration should now be supported.

Optimization

  • Scalability: Implemented experimental code for more compact code object usage, leading to more scalable C code and constants usage. This is expected to speed up C compilation and code generation in the future once fully validated.

  • Scons: Added support for C23 embedding of the constants blob. This will be utilized with Clang 19+ and GCC 15+, except on Windows and macOS where other methods are currently employed.

  • Compilation: Improved performance by avoiding redundant path checks in cases of duplicated package directories. This significantly speeds up certain scenarios where file system access is slow.

  • Scons: Enhanced detection of static libpython, including for self-compiled, uninstalled Python installations.

Anti-Bloat

  • Improved no_docstrings support for the xgboost package. (Added in 2.5.7)

  • Avoided unnecessary usage of numpy for the PIL package.

  • Avoided unnecessary usage of yaml for the numpy package.

  • Excluded tcltest TCL code when using tk-inter, as these TCL files are unused.

  • Avoided using IPython from the comm package.

  • Avoided using pytest from the pdbp package.

Organizational

  • UI: Added categories for plugins in the --help output. Non-package support plugin options are now shown by default. Introduced a dedicated --help-plugins option and highlighted it in the general --help output. This allows viewing all plugin options without needing to enable a specific plugin.

  • UI: Improved warnings for onefile and OS-specific options. These warnings are now displayed unless the command originates from a Nuitka-Action context, where users typically build for different modes with a single configuration set.

  • Nuitka-Action: The default mode is now app, building an application bundle on macOS and a onefile binary on other platforms.

  • UI: The executable path in --version output now uses the report path. This avoids exposing the user’s home directory, encouraging more complete output sharing.

  • UI: The Python flavor name is now included in the startup compilation message.

  • UI: Improved handling of missing Windows version information. If only partial version information (e.g., product or file version) is provided, an explicit error is given instead of an assertion error during post-processing.

  • UI: Corrected an issue where the container argument for run-inside-nuitka-container could not be a non-template file. (Fixed in 2.5.2)

  • Release: The PyPI upload sdist creation now uses a virtual environment. This ensures consistent project name casing, as it is determined by the setuptools version. While currently using the deprecated filename format, this change prepares for the new format.

  • Release: The osc binary is now used from the virtual environment to avoid potential issues with a broken system installation, as currently observed on Ubuntu.

  • Debugging: Added an experimental option to disable the automatic conversion to short paths on Windows.

  • UI: Improved handling of external data files that overwrite the original file. Nuitka now prompts the user to provide an output directory to prevent unintended overwrites. (Added in 2.5.6)

  • UI: Introduced the alias --include-data-files-external for the external data files option. This clarifies that the feature is not specific to onefile mode and encourages its wider use.

  • UI: Allowed none as a valid value for the macOS icon option. This disables the warning about a missing icon when intentionally not providing one.

  • UI: Added an error check for icon filenames without suffixes, preventing cases where the file type cannot be inferred.

  • UI: Corrected the examples for --include-package-data with file patterns, which used incorrect delimiters.

  • Scons: Added a warning about using gcc with LTO when make is unavailable, as this combination will not work. This provides a clearer message than the standard gcc warnings, which can be difficult for Python users to interpret.

  • Debugging: Added an option to preserve printing during reference count tests. This can be helpful for debugging by providing additional trace information.

  • Debugging: Added a small code snippet for module reference leak testing to the Developer Manual.

Tests

  • Temporarily disabled tests that expose regressions in Python 3.13.1 that Nuitka does not intend to follow.

  • Improved test organization by using more common code for package tests. The scanning for test cases and main files now utilizes shared code.

  • Added support for testing variations of a test with different extra flags. This is achieved by exposing a NUITKA_TEST_VARIANT environment variable.

  • Improved detection of commercial-only test cases by identifying them through their names rather than hardcoding them in the runner. These tests are now removed from the standard distribution to reduce clutter.

  • Utilized --mode options in tests for better control and clarity. Standalone mode tests now explicitly check for the application of the mode and error out if it’s missing. Mode options are added to the project options of each test case instead of requiring global configuration.

  • Added a test case to ensure comprehensive coverage of external data file usage in onefile mode. This helps detect regressions that may have gone unnoticed previously.

  • Increased test coverage for coroutines and async generators, including checks for inspect.isawaitable and testing both function and context objects.

Cleanups

  • Unified the code used for generating source archives for PyPI uploads, ensuring consistency between production and standard archives.

  • Harmonized the usage of include <...> vs include "..." based on the origin of the included files, improving code style consistency.

  • Removed code duplication in the exception handler generator code by utilizing the DROP_GENERATOR_EXCEPTION functions.

  • Updated Python version checks to reflect current compatibility. Checks for >=3.4 were changed to >=3, and outdated references to Python 3.3 in comments were updated to simply “Python 3”.

  • Scons: Simplified and streamlined the code for the command options. An OrderedDict is now used to ensure more stable build outputs and prevent unnecessary differences in recorded output.

  • Improved the executeToolChecked function by adding an argument to indicate whether decoding of returned bytes output to unicode is desired. This eliminates redundant decoding in many places.

Summary

This is a major release that consolidates Nuitka big time.

The scalability work has progressed; even if there are no immediately visible effects yet, the next releases will show them, as this is the main area of improvement these days.

The memory leaks found were very important and very old. This is the first time that asyncio should work perfectly with Nuitka; it was usable before, but compatibility is now much higher.

Also, this release delivers much nicer help output and handling of plugin help, which no longer needs tricks to see the options of a plugin that is not enabled (yet) during --help. The user interface is hopefully cleaner as a result.

Giampaolo Rodola: psutil: drop Python 2.7 support [Planet Python]

About dropping Python 2.7 support in psutil, 3 years ago I stated:

Not a chance, for many years to come. [Python 2.7] currently represents 7-10% of total downloads, meaning around 70k / 100k downloads per day.

Only 3 years later, and to my surprise, downloads for Python 2.7 dropped to 0.36%! As such, as of psutil 7.0.0, I finally decided to drop support for Python 2.7!

The numbers

These are downloads per month:

$ pypinfo --percent psutil pyversion
Served from cache: False
Data processed: 4.65 GiB
Data billed: 4.65 GiB
Estimated cost: $0.03

| python_version | percent | download_count |
| -------------- | ------- | -------------- |
| 3.10           |  23.84% |     26,354,506 |
| 3.8            |  18.87% |     20,862,015 |
| 3.7            |  17.38% |     19,217,960 |
| 3.9            |  17.00% |     18,798,843 |
| 3.11           |  13.63% |     15,066,706 |
| 3.12           |   7.01% |      7,754,751 |
| 3.13           |   1.15% |      1,267,008 |
| 3.6            |   0.73% |        803,189 |
| 2.7            |   0.36% |        402,111 |
| 3.5            |   0.03% |         28,656 |
| Total          |         |    110,555,745 |

According to pypistats.org, Python 2.7 downloads represent 0.28% of the total, around 15,000 downloads per day.

The pain

Maintaining 2.7 support in psutil had become increasingly difficult, but still possible. E.g. I could still run tests by using old PyPI backports. GitHub Actions could still be tweaked to run tests and produce 2.7 wheels on Linux and macOS. Not on Windows though, for which I had to use a separate service (Appveyor). Still, the amount of hacks in psutil source code necessary to support Python 2.7 piled up over the years and became quite big. Some disadvantages that come to mind:

  • Having to maintain a Python compatibility layer like psutil/_compat.py. This translated into extra code and extra imports.
  • The C compatibility layer to differentiate between Python 2 and 3 (#if PY_MAJOR_VERSION <= 3, etc.).
  • Dealing with the string vs. unicode differences, both in Python and in C.
  • Inability to use modern language features, especially f-strings.
  • Inability to freely use enums, which created a difference on how CONSTANTS were exposed in terms of API.
  • Having to install a specific version of pip and other (outdated) deps.
  • Relying on the third-party Appveyor CI service to run tests and produce 2.7 wheels.
  • Running 4 extra CI jobs on every commit (Linux, macOS, Windows 32-bit, Windows 64-bit) making the CI slower and more subject to failures (we have quite a bit of flaky tests).
  • The distribution of 7 wheels specific to Python 2.7. E.g. in the previous release I had to upload:
psutil-6.1.1-cp27-cp27m-macosx_10_9_x86_64.whl
psutil-6.1.1-cp27-none-win32.whl
psutil-6.1.1-cp27-none-win_amd64.whl
psutil-6.1.1-cp27-cp27m-manylinux2010_i686.whl
psutil-6.1.1-cp27-cp27m-manylinux2010_x86_64.whl
psutil-6.1.1-cp27-cp27mu-manylinux2010_i686.whl
psutil-6.1.1-cp27-cp27mu-manylinux2010_x86_64.whl

The removal

The removal was done in PR-2841, which removed around 1500 lines of code (nice!). It felt liberating. In doing so, in the doc I still made the promise that the 6.1.* series will keep supporting Python 2.7 and will receive critical bug fixes only (no new features). It will be maintained in a specific python2 branch. I explicitly kept the setup.py script compatible with Python 2.7 in terms of syntax, so that, when the tarball is fetched from PyPI, it will emit an informative error message on pip install psutil. The user trying to install psutil on Python 2.7 will see:

$ pip2 install psutil
As of version 7.0.0 psutil no longer supports Python 2.7.
Latest version supporting Python 2.7 is psutil 6.1.X.
Install it with: "pip2 install psutil==6.1.*".

As the informative message states, users that are still on Python 2.7 can still use psutil with:

pip2 install psutil==6.1.*
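
For illustration, a guard like the following near the top of setup.py achieves this kind of behaviour. This is a minimal sketch of the general technique, not psutil's actual code; the message is simply the one shown above.

import sys

# Keep this guard valid Python 2 syntax so old interpreters reach it and
# print the message instead of failing with a SyntaxError elsewhere.
if sys.version_info[0] == 2:
    sys.exit(
        "As of version 7.0.0 psutil no longer supports Python 2.7.\n"
        "Latest version supporting Python 2.7 is psutil 6.1.X.\n"
        'Install it with: "pip2 install psutil==6.1.*".'
    )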

Related tickets

Django Weblog: DSF member of the month - Lily Foote [Planet Python]

For February 2025, we welcome Lily Foote (@lilyf) as our DSF member of the month! ⭐

Lily Foote has been a contributor to Django core for many years, especially on the ORM. She is currently a member of the Django 6.x Steering Council and has been a DSF member since March 2021.
You can learn more about Lily by visiting her GitHub profile.

Let’s spend some time getting to know Lily better!

Can you tell us a little about yourself (hobbies, education, etc)

My name is Lily Foote and I’ve been contributing to Django for most of my career. I’ve also recently got into Rust and I’m excited about using Rust in Python projects. When I’m not programming, I love hiking, climbing and dancing (Ceilidh)! I also really enjoy playing board games and role playing games (e.g. Dungeons and Dragons).

How did you start using Django?

I’d taught myself Python in my final year at university by doing Project Euler problems and then decided I wanted to learn how to make a website. Django was the first Python web framework I looked at and it worked really well for me.

What other framework do you know and if there is anything you would like to have in Django if you had magical powers?

I’ve done a small amount with Flask and FastAPI. More than any new features, I think the thing that I’d most like to see is more long-term contributors to spread the work of keeping Django awesome.

What projects are you working on now?

The side project I’m most excited about is Django Rusty Templates, which is a re-implementation of Django’s templating language in Rust.

Which Django libraries are your favorite (core or 3rd party)?

The ORM of course!

What are the top three things in Django that you like?

Django Conferences, the mentorship program Djangonaut Space and the whole community!

You have been a mentor multiple times with GSoC and Djangonaut Space program, what is required according to you to be a good mentor?

I think being willing to invest time is really important. Checking in with your mentees frequently and being an early reviewer of their work. I think this helps keep their motivation up and allows for small corrections early on.

Any advice for future contributors?

Start small and as you get more familiar with Django and the process of contributing you can take on bigger issues. Also be patient with reviewers – Django has high standards, but is mostly maintained by volunteers with limited time.

You’re also currently a member of the Django 6.x Steering Council. How is that going so far?

Yes! It’s a huge honour! Since January, we’ve been meeting weekly and it feels like we’ve hardly scratched the surface of what we want to achieve. The biggest thing we’re trying to tackle is how to improve the contribution experience – especially evaluating new feature ideas – without draining everyone’s time and energy.

You have a lot of knowledge in the Django ORM, how did you start to contribute to this part?

I added the Greatest and Least expressions in Django 1.9, with the support of one of the core team at the time. After that, I kept showing up (especially at conference sprints) and finding a new thing to tackle.

Is there anything else you’d like to say?

Thanks for having me on!


Thank you for doing the interview, Lily!

Python Morsels: Newlines and escape sequences in Python [Planet Python]

Python allows us to represent newlines in strings using the \n "escape sequence" and Python uses line ending normalization when reading and writing with files.

Newline characters

This string contains a newline character:

>>> text = "Hello\nworld"
>>> text
'Hello\nworld'

That's what \n represents: a newline character.

If we print this string, we'll see that \n becomes an actual newline:

>>> print(text)
Hello
world
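
The article also mentions line-ending normalization when working with files. As a small illustration (my example, not from the article): in text mode Python translates \n to the platform's line separator when writing and back to \n when reading, unless newline="" is passed to open. On Windows, for example:

>>> with open("demo.txt", mode="w") as f:
...     f.write("Hello\nworld\n")
...
12
>>> open("demo.txt").read()              # translated back to \n while reading
'Hello\nworld\n'
>>> open("demo.txt", newline="").read()  # newline="" disables the translation
'Hello\r\nworld\r\n'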

Why does Python represent a newline as \n?

Escape sequences in Python

Every character in a Python …

Read the full article: https://www.pythonmorsels.com/newlines-and-escape-sequences/

Streamline Your Logs: Exploring Rsyslog for Effective System Log Management on Ubuntu [Linux Journal - The Original Magazine of the Linux Community]

Streamline Your Logs: Exploring Rsyslog for Effective System Log Management on Ubuntu

Introduction

In the world of system administration, effective log management is crucial for troubleshooting, security monitoring, and ensuring system stability. Logs provide valuable insights into system activities, errors, and security incidents. Ubuntu, like most Linux distributions, relies on a logging mechanism to track system and application events.

One of the most powerful logging systems available on Ubuntu is Rsyslog. It extends the traditional syslog functionality with advanced features such as filtering, forwarding logs over networks, and log rotation. This article provides a guide to managing system logs with Rsyslog on Ubuntu, covering installation, configuration, remote logging, troubleshooting, and advanced features.

Understanding Rsyslog

What is Rsyslog?

Rsyslog (Rocket-fast System for Log Processing) is an enhanced syslog daemon that allows for high-performance log processing, filtering, and forwarding. It is designed to handle massive volumes of logs efficiently and provides robust features such as:

  • Multi-threaded log processing

  • Log filtering based on various criteria

  • Support for different log formats (e.g., JSON, CSV)

  • Secure log transmission via TCP, UDP, and TLS

  • Log forwarding to remote servers

  • Writing logs to databases

Rsyslog is the default logging system in Ubuntu 20.04 LTS and later and is commonly used in enterprise environments.

Installing and Configuring Rsyslog

Checking if Rsyslog is Installed

Before installing Rsyslog, check if it is already installed and running with the following command:

systemctl status rsyslog

If the output shows active (running), then Rsyslog is installed. If not, you can install it using:

sudo apt update
sudo apt install rsyslog -y

Once installed, enable and start the Rsyslog service:

sudo systemctl enable rsyslog
sudo systemctl start rsyslog

To verify Rsyslog’s status, run:

systemctl status rsyslog

Understanding Rsyslog Configuration

Rsyslog Configuration Files

Rsyslog’s primary configuration files are:

  • /etc/rsyslog.conf – The main configuration file

  • /etc/rsyslog.d/ – Directory for additional configuration files

Basic Configuration Syntax

Rsyslog uses a facility, severity, action model:
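
For example (an illustrative sketch, not from the article; the remote host name is made up), each rule pairs a facility.severity selector with an action such as a log file or a remote destination:

# Illustrative rules, e.g. in /etc/rsyslog.d/50-example.conf
# Everything from the mail facility goes to its own file:
mail.*                  /var/log/mail.log
# Authentication messages:
auth,authpriv.*         /var/log/auth.log
# Emergencies are sent to all logged-in users:
*.emerg                 :omusrmsg:*
# Forward messages of severity info and above to a remote host (@@ = TCP, @ = UDP):
*.info                  @@logserver.example.com:514

After changing the configuration, restart the service with sudo systemctl restart rsyslog so the new rules take effect.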

Linux Networking Protocols: Understanding TCP/IP, UDP, and ICMP [Linux Journal - The Original Magazine of the Linux Community]

Linux Networking Protocols: Understanding TCP/IP, UDP, and ICMP

Introduction

In the world of Linux networking, protocols play a crucial role in enabling seamless communication between devices. Whether you're browsing the internet, streaming videos, or troubleshooting network issues, underlying networking protocols such as TCP/IP, UDP, and ICMP are responsible for the smooth transmission of data packets. Understanding these protocols is essential for system administrators, network engineers, and even software developers working with networked applications.

This article provides an exploration of the key Linux networking protocols: TCP (Transmission Control Protocol), UDP (User Datagram Protocol), and ICMP (Internet Control Message Protocol). We will examine their working principles, advantages, differences, and practical use cases in Linux environments.

The TCP/IP Model: The Foundation of Modern Networking

What is the TCP/IP Model?

The TCP/IP model (Transmission Control Protocol/Internet Protocol) serves as the backbone of modern networking, defining how data is transmitted across interconnected networks. It consists of four layers:

  • Application Layer: Handles high-level protocols like HTTP, FTP, SSH, and DNS.

  • Transport Layer: Ensures reliable or fast data delivery via TCP or UDP.

  • Internet Layer: Manages addressing and routing with IP and ICMP.

  • Network Access Layer: Deals with physical transmission methods such as Ethernet and Wi-Fi.

The TCP/IP model is simpler than the traditional OSI model but still retains the fundamental networking concepts necessary for communication.

Transmission Control Protocol (TCP): Ensuring Reliable Data Transfer

What is TCP?

TCP is a connection-oriented protocol that ensures data is delivered accurately and in order. It is widely used in scenarios where reliability is crucial, such as web browsing, email, and file transfers.

Key Features of TCP:
  • Reliable Transmission: Uses acknowledgments (ACKs) and retransmissions to ensure data integrity.

  • Connection-Oriented: Establishes a dedicated connection before data transmission.

  • Ordered Delivery: Maintains the correct sequence of data packets.

  • Error Checking: Uses checksums to detect transmission errors.

How TCP Works:
  1. Connection Establishment – The Three-Way Handshake:
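
(The article is truncated here. As a small illustration of TCP's connection-oriented behaviour in Python, not part of the original article: the three-way handshake itself is carried out by the kernel when connect() and accept() are called, so application code only sees an established, reliable byte stream. The port number below is arbitrary.)

import socket

# Server socket: SOCK_STREAM selects TCP.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5000))
server.listen(1)

# Client: connect() triggers the SYN / SYN-ACK / ACK handshake in the kernel.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 5000))

conn, addr = server.accept()        # handshake already completed by the kernel
client.sendall(b"hello over TCP")   # reliable, ordered delivery
print(conn.recv(1024))              # b'hello over TCP'

conn.close()
client.close()
server.close()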

12-02-2025

20:31

Asahi Linux Lead Developer Hector Martin Resigns From Linux Kernel [Slashdot: Linux]

Asahi lead developer Hector Martin, writing in an email: I no longer have any faith left in the kernel development process or community management approach. Apple/ARM platform development will continue downstream. If I feel like sending some patches upstream in the future myself for whatever subtree I may, or I may not. Anyone who feels like fighting the upstreaming fight themselves is welcome to do so. The Register points out that the action follows this interaction with Linus Torvalds. Hector Martin: If shaming on social media does not work, then tell me what does, because I'm out of ideas. Linus Torvalds: How about you accept the fact that maybe the problem is you. You think you know better. But the current process works. It has problems, but problems are a fact of life. There is no perfect. However, I will say that the social media brigading just makes me not want to have anything at all to do with your approach. Because if we have issues in the kernel development model, then social media sure as hell isn't the solution.

Read more of this story at Slashdot.

ONLYOFFICE 8.3 Released, Now Supports Apple iWork Files [OMG! Ubuntu!]

A new version of ONLYOFFICE Desktop Editors, a free, open-source office suite for Windows, macOS, and Linux, is now available to download. ONLYOFFICE 8.3 brings a bunch of new features and nimble enhancements spread throughout the full suite, which is composed of word processor, spreadsheet, presentation, form, and PDF editing apps. Such as? Well, the headline feature is the ability to open and work with Apple iWork documents (.pages, .numbers, .key) and Hancom Office files (.hwp, .hwpx). Opening these documents will convert them to OOXML to support editing. It’s not possible to edit the native files themselves, nor export/save edits back […]

How to Disable ‘App is Ready’ Notifications in Ubuntu [OMG! Ubuntu!]

Finding yourself annoyed at those ‘window is ready’ notifications which pop-up when you open some apps in GNOME Shell on Ubuntu? If so, you can disable them by installing a GNOME Shell extension. Now, notifications are helpful—heck, vital when they inform, alert, or indicate that something requires our immediate attention or actioning. But “app is ready” notifications? I don’t find them anything other than obvious. I’m not amnesic; I know the app is ready – I just opened it! They aren’t predictable either. Some apps show them, others don’t. It depends on the app’s metadata, how fast app initialisation is (you’ll see them more […]

LibreOffice 25.2 Released, This is What’s New [OMG! Ubuntu!]

LibreOffice 25.2 has been released, this year’s first major update to the leading open-source office software for Windows, macOS, and Linux. As you’d expect, the update delivers a sizeable set of changes spread throughout the productivity suite, including notable UI changes, accessibility improvements, and more important interoperability buffs to support cross-suite workflows. It’s important to remember that open-source software like LibreOffice doesn’t appear out of thin air; it’s made by humans, many unpaid, others paid to work on specific parts only. We all have personal wish-lists of features and changes we want our favourite open-source apps to add, but we […]

Installing Ubuntu on WSL in Windows 11 is Now Easier [OMG! Ubuntu!]

Windows Subsystem for Linux (WSL) user? If so, you will be pleased to hear that Ubuntu is now available in Microsoft’s new tar-based distro format — no need to use the sluggish Microsoft Store. Canonical announced the news today, noting that “the new tar-based WSL distro format allows developers and system administrators to distribute, install, and manage Ubuntu WSL instances from tar files without relying on the Microsoft Store.” In not relying on the Microsoft Store for distribution, it’s less hassle for enterprises to roll out (and customise) Ubuntu on WSL at scale as images packaged in using the new […]

Firefox 135 Brings New Tab Page Tweaks, AI Chatbot Access + More [OMG! Ubuntu!]

Right on schedule, a new update to the Mozilla Firefox web browser is available for download. Last month’s Firefox 134 release saw the New Tab page layout refreshed for users in the United States, let Linux go hands-on with touch-hold gestures, seeded the Ecosia search engine, and fine-tuned the performance of the built-in pop-up blocker. Firefox 135, as you can probably intuit, brings an equally sizeable set of changes to the fore, including a wider rollout of its new New Tab page layout to all locales where Stories are available. It’s not a massive makeover, granted. But the new layout adjusts the […]

How to Fix Spotify ‘No PubKey’ Error on Ubuntu [OMG! Ubuntu!]

Do you use the official Spotify DEB on Ubuntu (or an Ubuntu-based Linux distribution like Linux Mint)? If so, you’ll be used to receiving updates to the Spotify Linux client direct from the official Spotify APT repo, right alongside all your other DEB-based software. Thing is: if you haven’t checked for updates from the command line recently you might not be aware that the security key used to ‘sign’ packages from the Spotify APT repo stopped working at the end of last year. Annoying, but not catastrophic as it—thankfully—doesn’t stop the Spotify Linux app from working; it just pollutes terminal output […]

Linux Icon Pack Papirus Gets First Update in 8 Months [OMG! Ubuntu!]

Fans of the Papirus icon theme for Linux desktops will be happy to hear a new version is now available to download. Papirus’s first update in 2025 improves support for KDE Plasma 6 by adding Konversation, KTorrent and RedShift tray icons, KDE and Plasma logo glyphs for use in ‘start menu’ analogues, as well as an assortment of symbolic icons. Retro gaming fans will appreciate an expansion in mime type support in this update. Papirus now includes file icons for ROMs used for emulating ZX Spectrum, SEGA Dreamcast, SEGA Saturn, MSX, and Neo Geo Pocket consoles; and Papirus now uses different […]

GNOME Introduces New UI & Monospace Adwaita Fonts [OMG! Ubuntu!]

GNOME has announced a change to its default UI and monospace fonts ahead of the upcoming GNOME 48 release — a typographic turnabout that won’t impact Ubuntu users directly, though. Should you feel a sense of deja vu here it’s because GNOME trialled a font switch last year, during development of GNOME 47. Back then, it replaced its home-grown Cantarell font with the popular open-source sans Inter font (trivia: used by Zorin OS). The change was reverted prior to the GNOME 47 release due to various UI quirks, coverage issues, and compatibility (thus underlining the importance of testing things out prior […]

Try Mozilla’s New AI Detector Add-On for Firefox [OMG! Ubuntu!]

Want to find out if the text you’re reading online was written by a real human or spat out by a large language model (LLM) trying to sound like one? Mozilla’s Fakespot Deepfake Detector Firefox add-on may help give you an indication. Similar to online AI detector tools, the add-on can analyse text (of 32 words or more) to identify patterns, traits, and tells common in AI generated or manipulated text. It uses Mozilla’s proprietary ApolloDFT engine and a set of open-source detection models. But unlike some tools, Mozilla’s Fakespot Deepfake Detector browser extension is free to use, does […]

High Tide is a Promising New Linux TIDAL Client [OMG! Ubuntu!]

Linux users hunting for a native client to stream music from TIDAL will want to keep an eye on a promising new open-source app called High Tide. High Tide is an unofficial but native Linux client for the TIDAL music streaming service. It’s written in Python, uses GTK4/libadwaita UI, and leverages official TIDAL APIs for playback. TIDAL, often positioned as the ‘pro-artist music streaming platform’, isn’t as popular as industry titan Spotify (likely because it doesn’t offer a ‘free’ ad-supported tier) but is nonetheless a solid rival to it in terms of features and catalogue breadth. Windows, macOS, Android and […]

Thunderbird Email Client Moving to Monthly Feature Drops [OMG! Ubuntu!]

The Thunderbird email client is making its monthly ‘release channel’ builds the default download starting in March. “We’re excited to announce that starting with the 135.0 release in March 2025, the Thunderbird Release channel will be the default download,” Corey Bryant, manager of Thunderbird Release Operations, shares in an update on the project’s discussion hub. Right now, users who visit the Thunderbird website and hit the giant download button get the latest Extended Support Release (ESR) build by default. It gets one major feature update a year plus smaller bug fix and security updates issued in-between. The version of Thunderbird Ubuntu […]

Confirmed: Ubuntu Dev Discussions Moving to Matrix [OMG! Ubuntu!]

Ubuntu’s key developers have agreed to switch to Matrix as the primary platform for real-time development communications involving the distro. From March, Matrix will replace IRC as the place where critical Ubuntu development conversations, requests, meetings, and other vital chatter must take place. Developers are asked to ensure they have a presence on the platform so they are reachable. Only the current #ubuntu-devel and #ubuntu-release Libera IRC channels are moving to Matrix, but other Ubuntu development-related channels can choose to move – officially, given some projects were using Matrix over IRC already. As a result, any major requests to/of the key Ubuntu […]

EuroPython Society: Board Report for January 2025 [Planet Python]

The top priority for the board in January was finishing the hiring of our event manager. We’re super excited to introduce Anežka Müller! Anežka is a freelance event manager and a longtime member of the Czech Python community. She’s a member of the Pyvec board, co-organizes PyLadies courses, PyCon CZ, Brno Pyvo, and Brno Python Pizza. She’ll be working closely with the board and OPS team, mainly managing communication with service providers. Welcome onboard!

Our second priority was onboarding teams. We’re happy that we already have the Programme team in place—they started early and launched the Call for Proposals at the beginning of January. We’ve onboarded a few more teams and are in the process of bringing in the rest.

Our third priority was improving our grant programme in order to support more events with our limited budget and to make it more clear and transparent. We went through past data, came up with a new proposal, discussed it, voted on it, and have already published it on our blog.

Individual reports:

Artur

  • Updating onboarding/offboarding checklists for Volunteers and Board Members
  • Started development of https://github.com/EuroPython/internal-bot
  • Event Manager onboarding
  • Various infrastructure updates including new website deployment and self-hosted previews for Pull Requests to the website.
  • Setting up EPS AWS account.
  • Working out the Grant Guidelines update for 2025
  • Attending PyConWeb and FOSDEM
  • Reviewing updates to the Sponsors setup and packages for 2025
  • More documentation, sharing know-how and reviewing new proposals.

Mia

  • Brand strategy: Analysis of social media posts from previous years and web analytics. Call with a European open-source maintainer and a call with a local events organizer about EP content.
  • Comms & design: Call for proposal announcements, EP 2024 video promotions, speaker mentorship, and newsletter. Video production - gathering videos from speakers, video post-production, and scheduling them on YouTube shorts, and social media.
  • Event management coordination: Calls with the event manager and discussions about previous events.
  • Grants: Work on new grant guidelines and related comms.
  • Team onboarding: Calls with potential comms team members and coordination.
  • PR: Delivering a lightning talk at FOSDEM.

Cyril

  • Offboarding the old board
  • Permission cleanup
  • Team selection
  • Onboarding new team members
  • Administrative work on Grants

Aris

  • Worked on the Grants proposal
  • Teams selection
  • Follow-up with team members
  • Board meetings
  • Financial updates
  • Community outreach: FOSDEM

Ege

  • Working on various infrastructure updates, mostly related to the website.
  • Reviewing Pull Requests for the website and the internal bot
  • Working on the infrastructure team proposal.

Shekhar

  • Timeline: Discussion with the Programme Team, and planning to do the same with the other teams.
  • Visa Request letter: Setup and Test Visa Request Automation for the current year
  • Team selection discussion with past volunteers
  • Board Meetings

Anders

  • ...

Python Morsels: Avoid over-commenting in Python [Planet Python]

When do you need a comment in Python and when should you consider an alternative to commenting?

Documenting instead of commenting

Here is a comment I would not write in my code:

def first_or_none(iterable):
    # Return the first item in given iterable (or None if empty).
    for item in iterable:
        return item
    return None

That comment seems to describe what this code does... so why would I not write it?

I do like that comment, but I would prefer to write it as a docstring instead:

def first_or_none(iterable):
    """Return the first item in given iterable (or None if empty)."""
    for item in iterable:
        return item
    return None

Documentation strings are for conveying the purpose of a function, class, or module, typically at a high level. Unlike comments, they can be read by Python's built-in help function:

>>> help(first_or_none)
Help on function first_or_none in module __main__:

first_or_none(iterable)
    Return the first item in given iterable (or None if empty).

Docstrings are also read by other documentation-oriented tools, like Sphinx.

Non-obvious variables and values

Here's a potentially helpful comment:
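
(The article's own example is cut off in this excerpt. As an illustration of the kind of comment this section is about, one that explains a value whose purpose is not obvious from the code alone, consider something like the following, which is my stand-in rather than the article's example:)

# Retry a few times because the upstream API is briefly unavailable
# during its nightly maintenance window.
MAX_RETRIES = 5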

Read the full article: https://www.pythonmorsels.com/avoid-comments/

EuroPython Society: Changes in the Grants Programme for 2025 [Planet Python]

TL;DR:

  • We are making small changes to the Grant Programme
  • We are increasing transparency and reducing ambiguity in the guidelines.
  • We would like to support more events with our limited budget
  • We’ve introduced caps for events in order to make sure all grants are fairly given and we can support more communities.
  • We’ve set aside 10% of our budget for the local community.

Background:

The EPS introduced a Grant Programme in 2017. Since then, we have granted almost EUR 350k through the programme, partly via EuroPython Finaid and by directly supporting other Python events and projects across Europe. In the last two years, the Grant Programme has grown to EUR 100k per year, with even more requests coming in.

With this growth come new challenges in how to distribute funds fairly so that more events can benefit. Looking at data from the past two years, we’ve often been close to or over our budget. The guidelines haven’t been updated in a while. As grant requests become more complex, we’d like to simplify and clarify the process, and better explain it on our website.

We would also like to acknowledge that EuroPython, when traveling around Europe, has an additional impact on the host country, and we’d like to set aside part of the budget for the local community.

The Grant Programme is also a primary funding source for EuroPython Finaid. To that end, we aim to allocate 30% of the total Grant Programme budget to Finaid, an increase from the previous 25%.

Changes:

  • We’ve updated the text on our website, and split it into multiple sub-pages to make it easier to navigate. The website now includes a checklist of what we would like to see in a grant application, and a checklist for the Grants Workgroup – so that when you apply for the Grant you already know the steps that it will go through later and when you can expect an answer from us.
  • We looked at the data from previous years, and size and timing of the grant requests. With the growing number and size of the grants, to make it more accessible to smaller conferences and conferences happening later in the year, we decided to introduce max caps per grant and split the budget equally between the first and second half of the year. We would also explicitly split the total budget into three categories – 30% goes to the EuroPython finaid, 10% is reserved for projects in the host country. The remaining 60% of the budget goes to fund other Python Conferences. This is similar to the split in previous years, but more explicit and transparent.

Using 2024 data, and the budget available for Community Grants (60% of the total), we’ve simulated different budget caps and found a sweet spot at EUR 6,000, where we are able to support all the requests, with most of the grants being below that limit. For 2025 we expect to receive a similar or bigger number of requests. The table below shows the 2024 grant amounts under the different caps we simulated.


| Grant     | 2024        | 6k cap      | 5k cap      | 4k cap      | 3.5k cap    | 3k cap      |
| --------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| Grant #1  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #2  | € 8,000.00  | € 6,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #3  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #4  | € 5,000.00  | € 5,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #5  | € 10,000.00 | € 6,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #6  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #7  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  |
| Grant #8  | € 5,000.00  | € 5,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #9  | € 6,000.00  | € 6,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #10 | € 2,900.00  | € 2,900.00  | € 2,900.00  | € 2,900.00  | € 2,900.00  | € 2,900.00  |
| Grant #11 | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  |
| Grant #12 | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  |
| Grant #13 | € 450.00    | € 450.00    | € 450.00    | € 450.00    | € 450.00    | € 450.00    |
| Grant #14 | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  |
| Grant #15 | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  |
| Grant #16 | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  |
| Grant #17 | € 3,500.00  | € 3,500.00  | € 3,500.00  | € 3,500.00  | € 3,500.00  | € 3,000.00  |
| Grant #18 | € 1,500.00  | € 1,500.00  | € 1,500.00  | € 1,500.00  | € 1,500.00  | € 1,500.00  |
| SUM       | € 66,350.00 | € 60,350.00 | € 57,350.00 | € 52,350.00 | € 48,350.00 | € 43,850.00 |
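
The simulation itself is just a matter of capping each request and summing; a quick sketch (my code, using the 2024 request amounts from the table above):

requests_2024 = [4000, 8000, 4000, 5000, 10000, 4000, 1000, 5000, 6000,
                 2900, 2000, 3000, 450, 3000, 1000, 2000, 3500, 1500]

for cap in (6000, 5000, 4000, 3500, 3000):
    # Each grant is paid out up to the cap; anything above it is trimmed.
    total = sum(min(amount, cap) for amount in requests_2024)
    print(cap, total)   # e.g. 6000 -> 60350, matching the column above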


We are introducing a special 10% pool of money to be used on projects in the host country (in 2025 that’s again the Czech Republic). This pool is set aside at the beginning of the year, with the caveat that we would like to deploy it in the first half of the year. Whatever is left unused goes back to the Community Pool to be used in the second half of the year.

Expected outcome:

  • Fairer Funding: By spreading our grants out during the year, conferences that happen later won’t miss out.
  • Easy to Follow: Clear rules and deadlines cut down on confusion about how much you can get and what it’s for.
  • Better Accountability: We ask for simple post-event reports so we can see where the money went and what impact it made.
  • Stronger Community: Funding more events grows our Python network across Europe, helping everyone learn, connect, and collaborate.

Real Python: Quiz: Python Keywords: An Introduction [Planet Python]

In this quiz, you’ll test your understanding of Python Keywords.

Python keywords are reserved words with specific functions and restrictions in the language. These keywords are always available in Python, which means you don’t need to import them. Understanding how to use them correctly is fundamental for building Python programs.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Zato Blog: Modern REST API Tutorial in Python [Planet Python]

Modern REST API Tutorial in Python

Great APIs don't win theoretical arguments - they simply work reliably and make developers' lives easier.

Here's a tutorial on what building production APIs is really about: creating interfaces that are practical to use, while keeping your systems maintainable for years to come.

Sound intriguing? Read the modern REST API tutorial in Python here.

Modern REST API tutorial in Python

More resources

➤ Python API integration tutorials
What is a Network Packet Broker? How to automate networks in Python?
What is an integration platform?
Python Integration platform as a Service (iPaaS)
What is an Enterprise Service Bus (ESB)? What is SOA?
Open-source iPaaS in Python

Kushal Das: pass using stateless OpenPGP command line interface [Planet Python]

Yesterday I wrote about how I am using a different tool for git signing and verification. Next, I replaced my pass usage. I have a small patch to use the stateless OpenPGP command line interface (SOP). It is an implementation-agnostic standard for handling OpenPGP messages. You can read the whole SPEC here.

Installation

cargo install rsop rsop-oct

And I copied the bash script from my repository to somewhere on my PATH.

The rsoct binary from rsop-oct follows the same SOP standard but uses the card for signing/decryption. I stored my public key in the ~/.password-store/.gpg-key file, which is in turn used for encryption.

Usage

Here nothing changed related to my daily pass usage, except the number of times I am typing my PIN :)
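
For reference, SOP tools read from stdin and write to stdout, so the underlying calls look roughly like the following sketch (my illustration, not from the post; exact arguments may differ, and with rsoct the private-key operation happens on the card):

# Encrypt a secret to the certificate kept in the password store:
rsop encrypt ~/.password-store/.gpg-key < secret.txt > secret.asc

# Decrypt it again (rsoct will prompt for the card PIN):
rsoct decrypt < secret.asc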

PyCoder’s Weekly: Issue #668: NumPy, Compiling Python 1.0, BytesIO, and More (Feb. 11, 2025) [Planet Python]

#668 – FEBRUARY 11, 2025
View in Browser »


NumPy Techniques and Practical Examples

In this video course, you’ll learn how to use NumPy by exploring several interesting examples. You’ll read data from a file into an array and analyze structured arrays to perform a reconciliation. You’ll also learn how to quickly chart an analysis & turn a custom function into a vectorized function.
REAL PYTHON course

Let’s Compile Python 1.0

As part of the celebration of 31 years of Python, Bite Code compiles the original Python 1.0 and plays around with it.
BITE CODE!

Postman AI Agent Builder Is Here: The Quickest Way to Build AI Agents. Start Building

Postman AI Agent Builder is a suite of solutions that accelerates agent development. With centralized access to the latest LLMs and APIs from over 18,000 companies, plus no-code workflows, you can quickly connect critical tools and build multi-step agents — all without writing a single line of code →
POSTMAN sponsor

Save Memory With BytesIO

If you want to save memory when reading from a BytesIO object, getvalue() is surprisingly a good choice.
ITAMAR TURNER-TRAURING

Discussions

Python Jobs

Backend Software Engineer (Anywhere)

Brilliant.org

More Python Jobs >>>

Articles & Tutorials

How to Split a String in Python

This tutorial will help you master Python string splitting. You’ll learn to use .split(), .splitlines(), and re.split() to effectively handle whitespace, custom delimiters, and multiline text, which will level up your data parsing skills.
REAL PYTHON

The Mutable Trap: Avoiding Unintended Side Effects in Python

“Ever had a Python function behave strangely, remembering values between calls when it shouldn’t? You’re not alone! This is one of Python’s sneakiest pitfalls—mutable default parameters.”
CRAIG RICHARDS • Shared by Bob

Posit Package Manager: Secure Python Library Management

Python developers use Posit Package Manager to mirror public & internally developed repos within their firewalls. Get reporting on known vulnerabilities to proactively address potential threats. High-security environments can even run air-gapped.
POSIT sponsor

Decorator JITs: Python as a DSL

There are several Just In Time compilation tools out there that allow you to decorate a function to indicate you want it compiled. This article shows you how that works.
ELI BENDERSKY

Better Unit-Tests for Email With Django 5.2

Django 5.2 contains a new helper on the email class to make it easier to write unit-tests validating that your email contains the content you expect it to contain.
MEDIUM.COM/AMBIENT-INNOVATION • Shared by Ronny Vedrilla

Rendering Form Fields as Group in Django

Django 5.0 added the concept of field groups which make it easier to customize the layout of Django forms. This article covers what groups are and how to use them.
VALENTINO GAGLIARDI

Developer Philosophy

The author was recently invited with other senior devs to give a lightning talk on their personal development philosophy. This post captures those thoughts.
QNTM

Interrupting Scripts Without Tracebacks

This Things-I’ve-Learned post talks about how you can suppress the KeyboardInterrupt expression so your program doesn’t exit with a traceback.
RODRIGO GIRÃO SERRÃO

PEP 772: Packaging Governance Process

This PEP proposes a Python Packaging Council with broad authority over packaging standards, tools, and implementations.
PYTHON.ORG

Python Terminology: An Unofficial Glossary

“Definitions for colloquial Python terminology (effectively an unofficial version of the Python glossary).”
TREY HUNNER

Projects & Code

Events

Python Atlanta

February 14, 2025
MEETUP.COM

Python Barcamp Karlsruhe 2025

February 15 to February 17, 2025
BARCAMPS.EU

PyData Bristol Meetup

February 20, 2025
MEETUP.COM

DjangoCongress JP 2025

February 22 to February 23, 2025
DJANGOCONGRESS.JP

PyConf Hyderabad 2025

February 22 to February 24, 2025
PYCONFHYD.ORG


Happy Pythoning!
This was PyCoder’s Weekly Issue #668.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Python Insider: Python 3.14.0 alpha 5 is out [Planet Python]

Here comes the antepenultimate alpha.

https://www.python.org/downloads/release/python-3140a5/

This is an early developer preview of Python 3.14

Major new features of the 3.14 series, compared to 3.13

Python 3.14 is still in development. This release, 3.14.0a5, is the fifth of seven planned alpha releases.

Alpha releases are intended to make it easier to test the current state of new features and bug fixes and to test the release process.

During the alpha phase, features may be added up until the start of the beta phase (2025-05-06) and, if necessary, may be modified or deleted up until the release candidate phase (2025-07-22). Please keep in mind that this is a preview release and its use is not recommended for production environments.

Many new features for Python 3.14 are still being planned and written. Among the major new features and changes so far:

The next pre-release of Python 3.14 will be the penultimate alpha, 3.14.0a6, currently scheduled for 2025-03-14.

More resources

And now for something completely different

2025-01-29 marked the start of a new lunar year, the Year of the Snake 🐍 (and the Year of Python?).

For centuries, π was often approximated as 3 in China. Some time between the years 1 and 5 CE, astronomer, librarian, mathematician and politician Liu Xin (劉歆) calculated π as 3.154.

Around 130 CE, mathematician, astronomer, and geographer Zhang Heng (張衡, 78–139) compared the celestial circle with the diameter of the earth as 736:232 to get 3.1724. He also came up with a formula for the ratio between a cube and inscribed sphere as 8:5, implying the ratio of a square’s area to an inscribed circle is √8:√5. From this, he calculated π as √10 (~3.162).

Third century mathematician Liu Hui (刘徽) came up with an algorithm for calculating π iteratively: calculate the area of a polygon inscribed in a circle, then as the number of sides of the polygon is increased, the area becomes closer to that of the circle, from which you can approximate π.

This algorithm is similar to the method used by Archimedes in the 3rd century BCE and Ludolph van Ceulen in the 16th century CE (see 3.14.0a2 release notes), but Archimedes only went up to a 96-sided polygon (96-gon). Liu Hui went up to a 192-gon to approximate π as 157/50 (3.14) and later a 3072-gon for 3.14159.

Liu Hui wrote a commentary on the book The Nine Chapters on the Mathematical Art which included his π approximations.

In the fifth century, astronomer, inventor, mathematician, politician, and writer Zu Chongzhi (祖沖之, 429–500) used Liu Hui’s algorithm to inscribe a 12,288-gon to compute π between 3.1415926 and 3.1415927, correct to seven decimal places. This was more accurate than Hellenistic calculations and wouldn’t be improved upon for 900 years.

Happy Year of the Snake!

Enjoy the new release

Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organisation contributions to the Python Software Foundation.

Regards from a remarkably snowless Helsinki,

Your release team,
Hugo van Kemenade
Ned Deily
Steve Dower
Łukasz Langa

Real Python: Building a Python Command-Line To-Do App With Typer [Planet Python]

Building an application to manage your to-do list can be an interesting project when you’re learning a new programming language or trying to take your skills to the next level. In this video course, you’ll build a functional to-do application for the command line using Python and Typer, which is a relatively young library for creating powerful command-line interface (CLI) applications in almost no time.

With a project like this, you’ll apply a wide set of core programming skills while building a real-world application with real features and requirements.

In this video course, you’ll learn how to:

  • Build a functional to-do application with a Typer CLI in Python
  • Use Typer to add commands, arguments, and options to your to-do app
  • Test your Python to-do application with Typer’s CliRunner and pytest
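
For a flavour of what a Typer CLI looks like, here is a minimal sketch (not the course's actual application; the command name and fields are invented):

import typer

app = typer.Typer()

@app.command()
def add(description: str, priority: int = 2):
    """Add a to-do item with an optional priority."""
    typer.echo(f"to-do added: {description!r} (priority {priority})")

if __name__ == "__main__":
    app()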

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Kushal Das: Using openpgp-card-tool-git with git [Planet Python]

One of the great strengths of Unix systems is the way many small tools work together. One such tool I have been using for some time handles git signing and verification with OpenPGP, using my Yubikey for the actual signing operation via openpgp-card-tool-git. I replaced the standard gpg for this use case with the oct-git command from this project.

Installation & configuration

cargo install openpgp-card-tool-git

Then you will have to update your git configuration (in my case the global configuration).

git config --global gpg.program <path to oct-git>

I am assuming that you already had git configured for signing; otherwise, you will also have to run the following two commands.

git config --global commit.gpgsign true
git config --global tag.gpgsign true

Usage

Before you start using it, you want to save the pin in your system keyring.

Use the following command.

oct-git --store-card-pin

That is it: now git commit will sign your commits using the oct-git tool.
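
To check that the setup works end to end, you can make a commit and ask git to verify its signature (plain git commands, shown here only as a quick sanity check):

git commit --allow-empty -m "test signed commit"
git verify-commit HEAD
git log --show-signature -1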

In the next blog post I will show how to use the other tools from the author for various OpenPGP operations.

Seth Michael Larson: Building software for connection (#2: Consensus) [Planet Python]

This is the second article in a series about “software for connection”.

In the previous article we concluded that a persistent always-on internet connection isn't required for software to elicit feelings of connection between humans.

Building on this conclusion, let's explore how the Animal Crossing software was able to intercommunicate without requiring a centralized server and infrastructure, and the trade-offs of these design decisions.


Image of Tom Nook from an Animal Crossing online contest (Nookipedia)

Distributing digital goods without the internet

Animal Crossing has over 1,000 unique items that need to be collected for a complete catalog, including furniture, wallpapers, clothing, parasols, and carpets. Many of these items are quite rare or were only programmed to be accessible through an official Nintendo-affiliated distribution such as a magazine or online contest.

Beyond official distributions, it's clear Animal Crossing's designer, Katsuya Eguchi, wanted players to cooperate to complete their catalogs. The game incentivized trading items between towns by assigning each town one “native fruit” (Apple, Orange, Cherry, Peach, or Pear) and randomly making a subset of items harder to find than others depending on a hidden “item group” variable (either A, B, or C).

Items could be exchanged between players when one player visits another town, but this required physically bringing your memory card to another player's GameCube. The GameCube might have come with a handle, but the 'cube wasn't exactly a portable console. Sharing a physical space isn't something you can do with everyone or on a regular basis.

So what did Katsuya Eguchi design for Animal Crossing? To allow for item distributions from magazines and contests, and to make player-to-player item sharing easier, Animal Crossing included a feature called “secret codes”.

This feature worked by allowing players to exchange 28-character codes with Tom Nook for items. Players could also generate codes for their friends to “send” an item from their own game to a different town. Codes could be shared by writing them on a paper note or sending them in an instant message or text message.


Huntr R. explaining how “secret codes” are implemented. A surprising amount of cryptography!

The forgotten durability of offline software

This Reddit comment thread from the GameCube subreddit was the initial inspiration for this entire series. The post is about someone's niece who just started playing Animal Crossing for the first time. The Redditor asked folks to send items to their niece's town using the secret code system.

It ended up surprising many folks that this system still worked in a game that is over 23 years old! For reference, Nintendo Wi-Fi Connection and Nintendo Network were only available for 8 and 13 years respectively. Below are a handful of the comments from the thread:

  • “That's still online???”
  • “It was online???!”
  • “For real does this still work lol?”
  • “...Was it ever online?”


secret code for my favorite Animal Crossing NES game Wario's Woods:

Xvl5HeG&C9prXu
IWhuzBinlVlqOg

It's hard not to take these comments as indicators that something is very wrong with internet-connected software today. What had to go wrong for a system that simply keeps working to be met with surprise? Many consumers' experience with software products today is that they become useless e-waste after some far-away service is discontinued a few years after purchase.

My intuition from this is that software that requires centralized servers and infrastructure to function will have shorter lifetimes than software which is offline or only opportunistically uses online functionality.

I don't think this is particularly insightful; more dependencies always mean less resilience. But if we're building software for human connection, then the software should optimally only be limited by the availability of humans to connect.

What is centralization good for?

[Diagram fields: CODETYPE, HIT%, S?, NPC CODE, PLAYER NAME, ITEM NUMBER, TOWN NAME, CHKSM; 20 bytes total]
Data layout of secret codes before being encrypted (Animal Crossing decompilation project)

Animal Crossing's secret code system is far from perfect. The system is easily abusable, as the same secret codes can be reused over and over by the same user to duplicate items without ever expiring. The only limit was that 3 codes could be used per day.

Secret codes are tied to a specific town and recipient name, but even this stopgap can be defeated by setting your name and town name to specific values to share codes across many different players.

Not long after Animal Crossing's release, the secret code algorithm was reverse-engineered, so secret codes for any item could be created for any town and recipient name as if they came from an official Nintendo distribution. This was possible because the secret code system relied on "security through obscurity".

Could centralization be the answer to preventing these abuses?

The most interesting property that a centralized authority approach provides is global consensus: forcing everyone to play by the same rules. By storing the “single source-of-truth” a central authority is able to prevent abuses like the ones mentioned above.

For example, a centralized “secret code issuing server” could generate new unique codes per-use and check each code's validity against a database to prevent users from generating their own illegitimate codes or codes being re-used multiple times.
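
As a rough sketch of that idea (not how any real Nintendo service worked), such a server only needs to remember which codes it issued and mark each one as spent on first redemption:

import secrets

issued = {}  # code -> item name; the server's single source of truth

def issue_code(item):
    code = secrets.token_urlsafe(16)
    issued[code] = item
    return code

def redeem_code(code):
    # Valid only if the server issued it and it hasn't been spent yet
    item = issued.pop(code, None)
    if item is None:
        raise ValueError("unknown or already-used code")
    return item

code = issue_code("royal crown")
print(redeem_code(code))   # works exactly once
# redeem_code(code)        # a second redemption would raise ValueError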

The problem with centralized consensus is that it tends to spread virally until it covers the entire software state. A centralized server can generate codes perfectly, but how can that same server know that the items you're exchanging for codes were obtained legitimately? To know this, the server would also need to track item legitimacy, leading to software which requires an internet connection to operate.

This is optimal from a correctness perspective, but as noted earlier, I suspect that if such a server had been a mandatory part of Animal Crossing's secret code system, the system would likely not be usable today.

This seems like a trade-off, which future would you rather have?

Redesigning Animal Crossing secret codes

If I were designing Animal Crossing's secret code system with modern hardware, what would it look like? How could we keep the offline fallback while providing consensus and being less abusable, especially for official distributions?

I would likely use a public-key cryptographic system for official distributions, embedding a certificate that could be used to “verify” that specific secret codes originated from the expected centralized entity. Codes that are accepted would be recorded to prevent reusing the same code multiple times in the same town. Using public-key cryptography prevents the system from being reverse-engineered to distribute arbitrary items unless the certificate's private key were cracked.
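
Here is a minimal sketch of that official-distribution idea in Python using the pyca/cryptography package; the payload format and item are invented, and a real design would embed a certificate rather than a bare public key. The distributor signs the code with its private key, and the game verifies it offline with the public key shipped in the software:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The distributor holds the private key; only the public key ships with the game.
distribution_key = Ed25519PrivateKey.generate()
embedded_public_key = distribution_key.public_key()

payload = b"town=Foozle;recipient=Maya;item=0x04A2"  # hypothetical code contents
signature = distribution_key.sign(payload)

# On the player's console: verify the code without any network access.
try:
    embedded_public_key.verify(signature, payload)
    print("code accepted, item granted")
except InvalidSignature:
    print("forged or corrupted code, rejected")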

For sharing items between players, I would implement a system where each town generated a public and private key, and the public key was shared to other towns whenever the software was able to, such as when a player visited the other town. Players would only be able to send items to players that they have visited (which for Animal Crossing required physical presence, more on this later!).

Each sender could store a nonce value for each potential recipient. Embedding that nonce into the secret code would allow the recipient's software to verify that the specific code hadn't been used yet. The nonce wouldn't have to be long to prevent simple reuse of codes.

Both of the above systems would require much more data to be embedded in each “secret code” than the 28-character codes from the GameCube allow. For this I would use QR codes, which can embed over 2 KB of data in a single code. Funnily enough, Animal Crossing: New Leaf and later games use QR code technology for players to share design patterns.

This design is still abusable if users can modify their software or hardware but doesn't suffer from the trivial-to-exploit flaws of Animal Crossing's secret code system.

Decentralized global consensus?

What if we could have the best of both worlds: consensus that is both global and decentralized? At least today, we are out of luck.

Decentralized global consensus is technologically feasible, but the existing solutions (mostly blockchains) are expensive (both in energy and capital) and can't handle throughput on any sort of meaningful scale.

Pick two: Decentralized, Global, and Efficient

There are many other decentralized consensus systems that are able to form “pockets” of useful peer-to-peer consensus using a fraction of the resources, such as email, BitTorrent, ActivityPub, and Nostr. These systems are only possible by adding some centralization or by only guaranteeing local consensus.

When is global consensus needed?

Obviously global consensus is important for certain classes of software, such as finance, civics, and infrastructure, but I wonder how the necessity of consensus changes for software with different risk profiles.

For software which has fewer risks associated with misuse is there as much need for global consensus? How can software for connection be designed to reduce risk and require less consensus to be effective? If global consensus and centralized servers become unnecessary, can we expect software for connection to be usable on much longer timescales, essentially for as long as there are users?

Quansight Labs Blog: PEP 517 build system popularity [Planet Python]

Analysis of PEP 517 build backends used in 8000 top PyPI packages

Leveraging Tmux and Screen for Advanced Session Management [Linux Journal - The Original Magazine of the Linux Community]

Leveraging Tmux and Screen for Advanced Session Management

Introduction

In the realm of Linux, efficiency and productivity are not just goals but necessities. Among the most powerful tools in a power user's arsenal are terminal multiplexers, specifically tmux and Screen. These tools enhance the command line interface experience by allowing users to run multiple terminal sessions within a single window, detach them so they keep running in the background, and reattach them at will. This guide delves into the world of tmux and Screen, showing you how to harness their capabilities to streamline your workflow and boost your productivity.

Understanding Terminal Multiplexers

What is a Terminal Multiplexer?

A terminal multiplexer is a software application that allows multiple terminal sessions to be accessed and controlled from a single screen. Users can switch between these sessions seamlessly, without the need to open multiple terminal windows. This capability is particularly useful in remote session management, where sessions need to remain active even when the user is disconnected.

Key Features and Benefits
  • Session Management: Keep processes running even after disconnecting.
  • Window Splitting: Divide your screen into multiple windows.
  • Persistent Sessions: Reconnect to sessions after disconnection without losing state.
  • Multiple Views: View different sessions side-by-side.

Getting Started with Screen

Brief History and Development

Screen, developed under the GNU Project, has been a staple among system administrators and power users for decades. It provides the basic functionality needed to manage multiple windows in a single session.

Installing Screen

To install Screen on Ubuntu or Debian:

sudo apt-get install screen

On Red Hat or CentOS:

sudo yum install screen

On Fedora:

sudo dnf install screen
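
Once installed, the basic workflow is to start a named session, detach with the Ctrl-a d key binding, and reattach later; for example:

screen -S work      # start a named session
# run a long task, then press Ctrl-a d to detach
screen -ls          # list running sessions
screen -r work      # reattach to the "work" session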

Enhancing System Security and Efficiency through User and Group Management [Linux Journal - The Original Magazine of the Linux Community]

Enhancing System Security and Efficiency through User and Group Management

Introduction

Linux, a powerhouse in the world of operating systems, is renowned for its robustness, security, and scalability. Central to these strengths is the effective management of users and groups, which ensures secure and efficient access to system resources. This guide delves into the intricacies of user and group management, providing a foundation for both newcomers and seasoned administrators to enhance their Linux system administration skills.

Understanding Users in Linux

In Linux, a user is anyone who interacts with the operating system, be it a human or a software agent. Users can be categorized into three types:

  1. Root User: Also known as the superuser, the root user has unfettered access to the system. This account can modify any file, run privileged commands, and has administrative rights over other user accounts.

  2. System Users: These accounts are created to run specific services such as web servers or database systems. Typically, these users do not have login capabilities and are used to segregate duties for security purposes.

  3. Regular Users: These are the typical accounts created for actual people using the system. They have more limited privileges compared to the root user, which can be adjusted through group memberships or permission changes.

Each user is uniquely identified by a User ID (UID). The UID for the root user is always 0, while UIDs for other users usually start from 1000 upwards by default.

Understanding Groups in Linux

A group in Linux is a collection of users who share certain privileges and access rights. Groups make it easier to manage permissions for a collection of users, rather than having to assign permissions individually.

  • Primary Group: When a user is created, they are automatically assigned a primary group. This group is typically named after the username and is used for setting the default permissions when the user creates new files or directories.
  • Secondary Groups: Users can be added to additional groups, allowing them more granular access to resources.

Groups are identified by a Group ID (GID), similar to how users are identified by UIDs.

User and Group Management Tools

Linux offers a suite of command-line tools for managing users and groups:
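
The most frequently used of these are useradd, usermod, userdel, groupadd, and gpasswd; a few typical invocations (the user and group names here are only examples):

sudo useradd -m alice                 # create a user with a home directory
sudo passwd alice                     # set the user's password
sudo groupadd developers              # create a new group
sudo usermod -aG developers alice     # add the user to a secondary group
sudo userdel -r alice                 # remove the user and their home directory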

09-02-2025

20:20

What Do Linux Kernel Developers Think of Rust? [Slashdot: Linux]

Keynotes at this year's FOSDEM included free AI models and systemd, reports Heise.de, along with a progress report from Miguel Ojeda, supervisor of the Rust integration in the Linux kernel. Only eight people remain in the core team around Rust for Linux... Miguel Ojeda therefore launched a survey among kernel developers, including those outside the Rust community, and presented some of the more important voices in his FOSDEM talk. The overall mood towards Rust remains favorable, especially as Linus Torvalds and Greg Kroah-Hartman are convinced of the necessity of Rust integration. This is less about rapid progress and more about finding new talent for kernel development in the future. The reaction was mostly positive, judging by Ojeda's slides:

  • "2025 will be the year of Rust GPU drivers..." — Daniel Almeida
  • "I think the introduction of Rust in the kernel is one of the most exciting development experiments we've seen in a long time." — Andrea Righi
  • "[T]he project faces unique challenges. Rust's biggest weakness, as a language, is that relatively few people speak it. Indeed, Rust is not a language for beginners, and systems-level development complicates things even more. That said, the Linux kernel project has historically attracted developers who love challenging software — if there's an open source group willing to put the extra effort for a better OS, it's the kernel devs." — Carlos Bilbao
  • "I played a little with [Rust] in user space, and I just absolutely hate the cargo concept... I hate having to pull down other code that I do not trust. At least with shared libraries, I can trust a third party to have done the build and all that... [While Rust should continue to grow in the kernel], if a subset of C becomes as safe as Rust, it may make Rust obsolete..." — Steven Rostedt

Rostedt wasn't sure if Rust would attract more kernel contributors, but did venture this opinion: "I feel Rust is more of a language that younger developers want to learn, and C is their dad's language."

But still "contention exists within the kernel development community between those pro-Rust and -C camps," argues The New Stack, citing the latest remarks from kernel maintainer Christoph Hellwig (who had earlier likened the mixing of Rust and C to cancer). Three days later Hellwig reiterated his position on the Linux kernel mailing list: "Every additional bit that another language creeps in drastically reduces the maintainability of the kernel as an integrated project. The only reason Linux managed to survive so long is by not having internal boundaries, and adding another language completely breaks this. You might not like my answer, but I will do everything I can do to stop this. This is NOT because I hate Rust. While not my favourite language it's definitively one of the best new ones and I encourage people to use it for new projects where it fits. I do not want it anywhere near a huge C code base that I need to maintain."

But the article also notes that Google "has been a staunch supporter of adding Rust to the kernel for Linux running in its Android phones." The use of Rust in the kernel is seen as a way to avoid memory vulnerabilities associated with C and C++ code and to add more stability to the Android OS. "Google's wanting to replace C code with Rust represents a small piece of the kernel but it would have a huge impact since we are talking about billions of phones," Ojeda told me after his talk.

In addition to Google, Rust adoption and enthusiasm for it are increasing as Rust gets more architectural support and as "maintainers become more comfortable with it," Ojeda told me. "Maintainers have already told me that if they could, then they would start writing Rust now," Ojeda said. "If they could drop C, they would do it...." Amid the controversy, there has been a steady stream of vocal support for Ojeda. Much of his discussion also covered statements given by advocates for Rust in the kernel, ranging from lead developers of the kernel, including Linux creator Linus Torvalds himself, to technology leads from Red Hat, Samsung, Google, Microsoft and others.

Read more of this story at Slashdot.

08-02-2025

20:47

Mixing Rust and C in Linux Likened To Cancer By Kernel Maintainer [Slashdot: Linux]

A heated dispute has erupted in the Linux kernel community over the integration of Rust code, with kernel maintainer Christoph Hellwig likening multiple programming languages to "cancer" for the project's maintainability. The conflict centers on a proposed patch enabling Rust-written device drivers to access the kernel's DMA API, which Hellwig strongly opposed. While the dispute isn't about Rust itself, Hellwig argues that maintaining cross-language codebases severely compromises Linux's integrated nature. From a report: "Don't force me to deal with your shiny language of the day," he [Hellwig] wrote. "Maintaining multi-language projects is a pain I have no interest in dealing with. If you want to use something that's not C, be that assembly or Rust, you write to C interfaces and deal with the impedance mismatch yourself as far as I'm concerned." This resistance follows the September departure of Microsoft engineer Wedson Almeida Filho from the Rust for Linux project, citing "nontechnical nonsense."

Read more of this story at Slashdot.

LibreOffice 25.2, the office suite that meets today’s user needs [Press Releases Archives - The Document Foundation Blog]

The new major release provides many user interface and accessibility improvements, plus the usual interoperability features

Berlin, 6 February 2025 – LibreOffice 25.2, the new major release of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple Silicon and Intel) and Linux is available on our download page. LibreOffice is the best office suite for users who want to retain control over their individual software and documents, thereby protecting their privacy and digital life from the commercial interference and the lock-in strategies of Big Tech.

LibreOffice is the only office suite designed to meet the actual needs of the user – not just their eyes. It offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes, optimising the space available to put the maximum number of features just a click or two away.

It is also the only software for creating documents (that may contain personal or confidential information) that respects the user’s privacy, ensuring that the user can decide if and with whom to share the content they create, thanks to the standard and open format that is not used as a lock-in tool, forcing periodic software updates. All this with a feature set that is comparable to the leading software on the market and far superior to that of any competitor.

What makes LibreOffice unique is the LibreOffice Technology Platform, the only one on the market that allows the consistent development of desktop, mobile and cloud versions – including those provided by companies in the ecosystem – capable of producing identical and fully interoperable documents based on the two available ISO standards: the open ODF or Open Document Format (ODT, ODS and ODP) and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a huge number of artificial (and unnecessary) lock-in complexities that create problems for users convinced they are using a standard format.

End users can get first-level technical support from volunteers on the user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org. LibreOffice Writer Guide can be downloaded from https://books.libreoffice.org/en/.

New Features of LibreOffice 25.2

PRIVACY

  • LibreOffice can remove all personal information associated with any document (author names and timestamps, editing time, printer name and configuration, document template, author and date for comments and tracked changes).

CORE/GENERAL

  • LibreOffice 25.2 can read and write ODF version 1.4.
  • Many interoperability improvements with proprietary OOXML documents.
  • It is now possible to automatically sign documents after defining a default certificate.
  • Windows 7 and 8/8.1 are deprecated platforms, and support will be removed in version 25.8.
  • Extensions and features relying on Python will not work on Windows 7.

WRITER

  • Improvements to Track Changes management, to handle large numbers of changes in long documents.
  • Comments are now tracked in the Navigator when you move the focus into comments, while resizing the area containing comments now shows a visual guide.
  • Added options to set a default zoom level for opening documents, overriding the level stored in documents.
  • It is now possible to delete all content of a content type (excluding headings) via the Navigator.

CALC

  • Addition of a “Handle Duplicate Records” dialog to select/remove duplicate records in Calc.
  • Both the Function Wizard dialog and Functions Sidebar deck received improvements to searching and user experience.
  • Solver models can be saved into spreadsheets and Solver is able to provide a sensitivity analysis report.
  • Addition of new sheet protection options related to Pivot Tables, Pivot Charts and AutoFilters.

IMPRESS & DRAW

  • Many improvements to all Impress templates, which now have visible elements (font colour set to black) in Master Notes and Handout.
  • Objects can be centred on the Impress slide (or Draw page) in one single step.
  • Automatic repeating of slides can now be activated in windowed mode.
  • Overflowing text in presenter notes is no longer cut off when printing.

USER INTERFACE

  • The list of recently used files now has a checkbox “[x] Current Module Only” that allows filtering the list.
  • Object boundaries are now toggled independently of Formatting Marks.
  • The colour of non-printing characters and the background colour of comments can be customised.
  • Default items for unordered lists (also known as bullets) have been updated.
  • Significant improvements to application themes.

ACCESSIBILITY

  • Improved warning and error levels in the Accessibility Sidebar, with option to ignore warnings.
  • User interface elements report an accessible identifier which can be used by assistive technologies.
  • Windows: accessibility gets enabled whenever a tool queries information on the accessibility level, and accessible relations are correctly reported.
  • Linux: positions of UI elements (including on Wayland) are correctly reported on the accessibility level.

SCRIPTFORGE LIBRARIES

  • An extensible and robust collection of macro scripting resources to be invoked from user Basic or Python scripts.
  • The whole set of services (except when the native built-in function is better) is made available for Python scripts with identical syntax and behaviour as in Basic.
  • The English documentation of ScriptForge libraries is now partially integrated in the LibreOffice help pages.

Contributors to LibreOffice 25.2

A total of 176 developers contributed to the new features in LibreOffice 25.2: 47% of the code commits came from 50 developers employed by ecosystem companies – Collabora and allotropia – and other organisations, 31% from seven developers at The Document Foundation, and the remaining 22% from 119 individual volunteer developers.

An additional 189 volunteers have committed 771,263 localized strings in 160 languages, representing hundreds of people working on translations. LibreOffice 25.2 is available in 120 languages, more than any other desktop software, making it available to over 5.5 billion people in their native language. In addition, over 2.4 billion people speak one of these 120 languages as a second language.

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.

Migrations to LibreOffice

The Document Foundation publishes a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the deployment of an LTS (long-term support) enterprise-optimised version of LibreOffice, plus migration consulting and training provided by certified professionals who offer value-added solutions consistent with proprietary offerings. Reference: www.libreoffice.org/get-help/professional-support/.

In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.

Availability of LibreOffice 25.2

LibreOffice 25.2 is available at www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple MacOS 10.15. LibreOffice Technology-based products for Android and iOS are listed here: www.libreoffice.org/download/android-and-ios/.

For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation still maintains the LibreOffice 24.8 family, which includes several months of back-ported fixes. The current release is LibreOffice 24.8.4.

LibreOffice users, free software advocates and community members can support The Document Foundation with a donation at www.libreoffice.org/donate.

[1] Release Notes: wiki.documentfoundation.org/ReleaseNotes/25.2

LibreOffice 24.8.4, optimised for the privacy-conscious user, is available for download [Press Releases Archives - The Document Foundation Blog]

Berlin, 19 December 2024 – LibreOffice 24.8.4, the fourth minor release of the LibreOffice 24.8 family of the free open source, volunteer-supported office suite for Windows (Intel, AMD and ARM), MacOS (Apple and Intel) and Linux, is available at www.libreoffice.org/download.

The release includes over 55 bug and regression fixes over LibreOffice 24.8.3 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats.

LibreOffice is the only office suite that respects the privacy of the user, ensuring that the user is able to decide if and with whom to share the content they create. It even allows deleting user related info from documents. As such, LibreOffice is the best option for the privacy-conscious office suite user, while offering a feature set comparable to the leading product on the market.

Also, LibreOffice offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes by using all the space available on the desktop to put the maximum number of features just a click or two away.

The biggest advantage over competing products is the LibreOffice Technology engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those from ecosystem companies – are based.

This allows LibreOffice to produce identical and fully interoperable documents based on two ISO standards: the open and neutral Open Document Format (ODT, ODS, ODP) and the closed and fully proprietary Microsoft OOXML (DOCX, XLSX, PPTX), which hides a large amount of artificial complexity, and can cause problems for users who are confident that they are using a true open standard.

End users looking for support can download the LibreOffice 24.8 Getting Started, Writer, Impress, Draw and Math guides from the following link: books.libreoffice.org/. In addition, they can get first-level technical support from volunteers on mailing lists and the Ask LibreOffice website: ask.libreoffice.org.

LibreOffice for Enterprise

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with three or five year backporting of security patches, other dedicated value-added features and Service Level Agreements: www.libreoffice.org/download/libreoffice-in-business/.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.

The Document Foundation’s migration protocol helps companies move from proprietary office suites to LibreOffice, by installing the LTS (long-term support) enterprise-optimised version of LibreOffice, plus consulting and training provided by certified professionals: www.libreoffice.org/get-help/professional-support/.

In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.

LibreOffice 24.8.4 availability

LibreOffice 24.8.4 is available from www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 (no longer supported by Microsoft) and Apple MacOS 10.15. Products for Android and iOS are at www.libreoffice.org/download/android-and-ios/.

Users of the LibreOffice 24.2 branch (the last update being 24.2.7), which has recently reached end-of-life, should consider upgrading to LibreOffice 24.8.4, as this is already the most tested version of the program. Early February will see the announcement of LibreOffice 25.2.

LibreOffice users, free software advocates and community members can support The Document Foundation by donating at www.libreoffice.org/donate.

Enterprises deploying LibreOffice can also donate, although the best solution for their needs would be to look for the enterprise-optimized versions of the software (with Long Term Support for security and Service Level Agreements to protect their investment) at www.libreoffice.org/download/libreoffice-in-business/.

[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.8.4/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.8.4/RC2.

Announcement of LibreOffice 24.8.3, the office suite optimised for the privacy-conscious office suite user who wants full control over the information they share [Press Releases Archives - The Document Foundation Blog]

Berlin, 14 November 2024 – LibreOffice 24.8.3, the third minor release of the LibreOffice 24.8 family of the free open source, volunteer-supported office suite for Windows (Intel, AMD and ARM), MacOS (Apple and Intel) and Linux, is available at www.libreoffice.org/download.

The release includes over 80 bug and regression fixes over LibreOffice 24.8.2 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. In addition, support for Visio template format VSTX has been added.

LibreOffice is the only office suite that respects the privacy of the user, ensuring that the user is able to decide if and with whom to share the content they create. It even allows deleting user related info from documents. As such, LibreOffice is the best option for the privacy-conscious office suite user, while offering a feature set comparable to the leading product on the market.

Also, LibreOffice offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes by using all the space available on the desktop to put the maximum number of features just a click or two away.

The biggest advantage over competing products is the LibreOffice Technology engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those from ecosystem companies – are based.

This allows LibreOffice to produce identical and fully interoperable documents based on the two ISO standards: the Open Document Format (ODT, ODS, ODP) and the fully proprietary Microsoft OOXML (DOCX, XLSX, PPTX), which hides a large amount of artificial complexity, and can cause problems for users who are confident that they are using a true open standard.

End users looking for support can download the LibreOffice 24.8 Getting Started, Writer and Impress guides from the following link: books.libreoffice.org/. In addition, they will be able to get first-level technical support from volunteers on mailing lists and the Ask LibreOffice website: ask.libreoffice.org.

LibreOffice for Enterprise

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with three or five year backporting of security patches, other dedicated value-added features and Service Level Agreements: www.libreoffice.org/download/libreoffice-in-business/.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.

The Document Foundation’s migration protocol helps companies move from proprietary office suites to LibreOffice, by installing the LTS (long-term support) enterprise-optimised version of LibreOffice, plus consulting and training provided by certified professionals: www.libreoffice.org/get-help/professional-support/.

In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.

LibreOffice 24.8.3 availability

LibreOffice 24.8.3 is available from www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 (no longer supported by Microsoft) and Apple macOS 10.15. Products for Android and iOS are at www.libreoffice.org/download/android-and-ios/.

LibreOffice users, free software advocates and community members can support The Document Foundation by donating at www.libreoffice.org/donate.

Enterprises deploying LibreOffice can also donate, although the best solution for their needs would be to look for the enterprise-optimized versions of the software (with Long Term Support for security and Service Level Agreements to protect their investment) at www.libreoffice.org/download/libreoffice-in-business/.

[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.8.3/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.8.3/RC2.

06-02-2025

12:44

'I'm Done With Ubuntu' [Slashdot: Linux]

Software developer and prolific blogger Herman Ounapuu, writing in a blog post: I liked Ubuntu. For a very long time, it was the sensible default option. Around 2016, I used the Ubuntu GNOME flavor, and after they ditched the Unity desktop environment, GNOME became the default option. I was really happy with it, both for work and personal computing needs. Estonian ID card software was also officially supported on Ubuntu, which made Ubuntu a good choice for family members. But then something changed. Ounapuu recounts how Ubuntu's biennial long-term support releases consistently broke functionality, from minor interface glitches to catastrophic system failures that left computers unresponsive. His breaking point came after multiple problematic upgrades affecting family members' computers, including one that rendered a laptop completely unusable during an upgrade from Ubuntu 20.04 to 22.04. Another incident left a relative's system with broken Firefox shortcuts and duplicate status bar icons after updating Lubuntu 18.04. Canonical's aggressive push of Snap packages has drawn particular criticism. The forced migration of system components from traditional Debian packages to Snaps resulted in compatibility issues, broken desktop shortcuts, and government ID card authentication failures. In one instance, he writes, a Snap-related bug in the GNOME desktop environment severely disrupted workplace productivity, requiring multiple system restarts to resolve. The author has since switched to Fedora, praising its implementation of Flatpak as a superior alternative to Snaps.

Read more of this story at Slashdot.

Red Hat Plans to Add AI to Fedora and GNOME [Slashdot: Linux]

In his post about the future of Fedora Workstation, Christian F.K. Schaller discusses how the Red Hat team plans to integrate AI with IBM's open-source Granite engine to enhance developer tools, such as IDEs, and create an AI-powered Code Assistant. He says the team is also working on streamlining AI acceleration in Toolbx and ensuring Fedora users have access to tools like RamaLama. From the post: One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBMs Granite effort we know have an AI engine that is available under proper open source licensing terms and which can be extended for many different usecases. Also the IBM Granite team has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that we make sure setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite and that we come up with other cool integration points. "I'm still not sure how I feel about this approach," writes designer/developer and blogger, Bradley Taunt. "While IBM Granite is an open source model, I still don't enjoy so much artificial 'intelligence' creeping into core OS development. This also isn't something optional on the end-users side, like a desktop feature or package. This sounds like it's going to be built directly into the core system." "Red Hat has been pushing hard towards AI and my main concern is having this influence other operating system dev teams. Luckily things seems AI-free in BSD land. For now, at least."

Read more of this story at Slashdot.

Popular Linux Orgs Freedesktop, Alpine Linux Are Scrambling For New Web Hosting [Slashdot: Linux]

An anonymous reader quotes a report from Ars Technica: In what is becoming a sadly regular occurrence, two popular free software projects, X.org/Freedesktop.org and Alpine Linux, need to rally some of their millions of users so that they can continue operating. Both services have largely depended on free server resources provided by Equinix (formerly Packet.net) and its Metal division for the past few years. Equinix announced recently that it was sunsetting its bare-metal sales and services, or renting out physically distinct single computers rather than virtualized and shared hardware. As reported by the Phoronix blog, both free software organizations have until the end of April to find and fund new hosting, with some fairly demanding bandwidth and development needs. An issue ticket on Freedesktop.org's GitLab repository provides the story and the nitty-gritty needs of that project. Both the X.org foundation (home of the 40-year-old window system) and Freedesktop.org (a shared base of specifications and technology for free software desktops, including Wayland and many more) used Equinix's donated space. [...] Alpine Linux, a small, security-minded distribution used in many containers and embedded devices, also needs a new home quickly. As detailed in its blog, Alpine Linux uses about 800TB of bandwidth each month and also needs continuous integration runners (or separate job agents), as well as a development box. Alpine states it is seeking co-location space and bare-metal servers near the Netherlands, though it will consider virtual machines if bare metal is not feasible.

Read more of this story at Slashdot.

Debian Package Dependency Management: Handling Dependencies [Linux Journal - The Original Magazine of the Linux Community]

Debian Package Dependency Management: Handling Dependencies

Introduction

Debian-based Linux distributions, such as Ubuntu, Linux Mint, and Debian itself, rely on robust package management systems to install, update, and remove software efficiently. One of the most critical aspects of package management is handling dependencies—ensuring that all required libraries and packages are present for an application to function correctly.

Dependency management is crucial for maintaining system stability, avoiding broken packages, and ensuring software compatibility. This article explores how Debian handles package dependencies, how to manage them effectively, and how to troubleshoot common dependency-related issues.

Understanding Debian Package Management

Debian uses the .deb package format, which contains precompiled binaries, configuration files, and metadata describing the package, including its dependencies. The primary tools for handling Debian packages are:

  • dpkg: A low-level package manager used for installing, removing, and querying .deb packages.

  • APT (Advanced Package Tool): A high-level package management system that resolves dependencies automatically and fetches required packages from repositories.

Without proper dependency handling, installing a single package could become a nightmare of manually finding and installing supporting files. APT streamlines this process by automating dependency resolution.

How Dependencies Work in Debian

Dependencies ensure that an application has all the necessary libraries and components to function correctly. In Debian, dependencies are defined in the package’s control file. These dependencies are categorized as follows:

  • Depends: Mandatory dependencies required for the package to work.

  • Recommends: Strongly suggested dependencies that enhance functionality but are not mandatory.

  • Suggests: Optional packages that provide additional features.

  • Breaks: Indicates that a package is incompatible with certain versions of another package.

  • Conflicts: Prevents the installation of two incompatible packages.

  • Provides: Allows one package to act as a substitute for another (useful for virtual packages).

For example, if you attempt to install a software package using APT, it will automatically fetch and install all required dependencies based on the Depends field.
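
As an illustration, a control file stanza declaring these fields might look like the following (example-app and the version constraints are invented for this sketch):

Package: example-app
Version: 1.2.3
Depends: libc6 (>= 2.34), python3 (>= 3.10)
Recommends: example-app-docs
Suggests: example-app-extras
Conflicts: old-example-app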

Managing Dependencies with APT

APT simplifies dependency management by automatically resolving and installing required packages. Some essential APT commands include:

  • Updating package lists: sudo apt update
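
A few other dependency-related commands that come up frequently, using nginx purely as an example package:

sudo apt install nginx            # install a package plus everything in its Depends
apt-cache depends nginx           # show the dependencies a package declares
sudo apt --fix-broken install     # repair a partially satisfied dependency chain
sudo apt autoremove               # remove dependencies no other package needs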

Simplifying User Accounts and Permissions Management in Linux [Linux Journal - The Original Magazine of the Linux Community]

Simplifying User Accounts and Permissions Management in Linux

Linux, renowned for its robustness and security, is a powerful multi-user operating system that allows multiple people to interact with the same system resources without interfering with each other. Proper management of user accounts and permissions is crucial to maintaining the security and efficiency of a Linux system. This article provides an exploration of how to effectively manage user accounts and permissions in Linux.

Understanding User Accounts in Linux

User accounts are essential for individual users to access and operate Linux systems. They help in resource allocation, setting privileges, and securing the system from unauthorized access. There are mainly two types of user accounts:

  • Root account: This is the superuser account with full access to all commands and files on a Linux system. The root account has the power to do anything, including tasks that can potentially harm the system, hence it should be used sparingly.
  • Regular user accounts: These accounts have more limited permissions, generally confined to the user's home directory. Permissions for these accounts are set in a way that protects the core functionalities of the system from unintended disruptions.

Additionally, Linux systems include various system accounts that are used to run services such as web servers, databases, and more.

Creating and Managing User Accounts

Creating a user account in Linux can be accomplished with the useradd or adduser commands. The adduser command is more interactive and user-friendly than useradd.

Creating a new user

sudo adduser newusername

This command creates a new user account and its home directory with default configuration files.

Setting user attributes
  • Password: Set or change passwords using the passwd command.
  • Home directory: Specify a home directory at creation with useradd -d /home/newusername newusername.
  • Login shell: Define the default shell with useradd -s /bin/bash newusername.
Modifying and deleting user accounts
  • To modify an existing user, use usermod. For example, sudo usermod -s /bin/zsh username changes the user's default shell to zsh.
  • To delete a user, along with their home directory, use userdel -r username.

Understanding Linux Permissions

In Linux, every file and directory has associated access permissions which determine who can read, write, or execute them.
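
Those permissions are inspected and changed with ls -l, chmod, and chown; for example (the file, user, and group names are placeholders):

ls -l report.txt                          # show owner, group, and permission bits
chmod 640 report.txt                      # owner read/write, group read, others none
sudo chown alice:developers report.txt    # change the file's owner and group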

03-02-2025

10:16

Facebook Flags Linux Topics As 'Cybersecurity Threats' [Slashdot: Linux]

Facebook has banned posts mentioning Linux-related topics, with the popular Linux news and discussion site, DistroWatch, at the center of the controversy. Tom's Hardware reports: A post on the site claims, "Facebook's internal policy makers decided that Linux is malware and labeled groups associated with Linux as being 'cybersecurity threats.' We tried to post some blurb about distrowatch.com on Facebook and can confirm that it was barred with a message citing Community Standards. DistroWatch says that the Facebook ban took effect on January 19. Readers have reported difficulty posting links to the site on this social media platform. Moreover, some have told DistroWatch that their Facebook accounts have been locked or limited after sharing posts mentioning Linux topics. If you're wondering if there might be something specific to DistroWatch.com, something on the site that the owners/operators perhaps don't even know about, for example, then it seems pretty safe to rule out such a possibility. Reports show that "multiple groups associated with Linux and Linux discussions have either been shut down or had many of their posts removed." However, we tested a few other Facebook posts with mentions of Linux, and they didn't get blocked immediately. Copenhagen-hosted DistroWatch says it has tried to appeal against the Community Standards-triggered ban. However, they say that a Facebook representative said that Linux topics would remain on the cybersecurity filter. The DistroWatch writer subsequently got their Facebook account locked... DistroWatch points out the irony at play here: "Facebook runs much of its infrastructure on Linux and often posts job ads looking for Linux developers." UPDATE: Facebook has admitted they made a mistake and stopped blocking the posts.

Read more of this story at Slashdot.

02-02-2025

23:02

Facebook Admits Linux-Post Crackdown Was 'In Error', Fixes Moderation Error [Slashdot: Linux]

Tom's Hardware reports: Facebook's heavy-handed censorship of Linux groups and topics was "in error," the social media juggernaut has admitted. Responding to reports earlier this week, sparked by the curious censorship of the eminently wholesome DistroWatch, Facebook contacted PCMag to say that it had made a mistake and that the underlying issue had been rectified. "This enforcement was in error and has since been addressed. Discussions of Linux are allowed on our services," said a Meta rep to PCMag. That is the full extent of the statement reproduced by the source... Copenhagen-hosted DistroWatch says it has appealed against the Community Standards-triggered ban shortly after it noticed it was in effect (January 19). PCMag received the Facebook admission of error on January 28. The latest statement from DistroWatch, which now prefers posting on Mastodon, indicates that Facebook has lifted the DistroWatch links ban. More details from PCMag: Meta didn't say what caused the crackdown in the first place. But the company has been revamping some of its content moderation and plans to replace its fact-checking methodology with a user-driven Community Notes, similar to X. "We're also going to change how we enforce our policies to reduce the kind of mistakes that account for the vast majority of the censorship on our platforms," the company said earlier this month, in another irony. "Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn't have been," Meta added in the same post.

Read more of this story at Slashdot.

01-02-2025

13:34

Android 16's Linux Terminal Runs Doom [Slashdot: Linux]

Google is enhancing Android 16's Linux Terminal app to support graphical Linux applications, so Android Authority decided to put it to the test by running Doom. From the report: The Terminal app first appeared in the Android 15 QPR2 beta as a developer option, and it still remains locked behind developer settings. Since its initial public release, Google pushed a few changes that fixed issues with the installation process and added a settings menu to resize the disk, forward ports, and backup the installation. However, the biggest changes the company has been working on, which include adding hardware acceleration support and a full graphical environment, have not been pushed to any public releases. Thankfully, since Google is working on this feature in the open, it's possible to simply compile a build of AOSP with these changes added in. This gives us the opportunity to trial upcoming features of the Android Linux Terminal app before a public release. To demonstrate, we fired up the Linux Terminal on a Pixel 9 Pro, tapped a new button on the top right to enter the Display activity, and then ran the 'weston' command to open up a graphical environment. (Weston is a reference implementation of a Wayland compositor, a modern display server protocol.) We also went ahead and enabled hardware acceleration beforehand as well as installed Chocolate Doom, a source port of Doom, to see if it would run. Doom did run, as you can see below. It ran well, which is no surprise considering Doom can run on literal potatoes. There wasn't any audio because an audio server isn't available yet, but audio support is something that Google is still working on.

Read more of this story at Slashdot.

Exploring LXC Containerization for Ubuntu Servers [Linux Journal - The Original Magazine of the Linux Community]

Exploring LXC Containerization for Ubuntu Servers

Introduction

In the world of modern software development and IT infrastructure, containerization has emerged as a transformative technology. It offers a way to package software into isolated environments, making it easier to deploy, scale, and manage applications. While Docker is the most popular containerization technology, there are other solutions that cater to different use cases and needs. One such solution is LXC (Linux Containers), which offers a more full-fledged approach to containerization, akin to lightweight virtual machines.

In this guide, we will explore how LXC works, how to set it up on Ubuntu Server, and how to leverage it for efficient and scalable containerization. Whether you're looking to run multiple isolated environments on a single server, or you want a lightweight alternative to virtualization, LXC can meet your needs. By the end of this article, you will have the knowledge to deploy, manage, and secure LXC containers on your Ubuntu Server setup.

What is LXC?

What are Linux Containers (LXC)?

LXC (Linux Containers) is an operating system-level virtualization technology that allows you to run multiple isolated Linux systems (containers) on a single host. Unlike traditional virtualization, which relies on hypervisors to emulate physical hardware for each virtual machine (VM), LXC containers share the host’s kernel while maintaining process and file system isolation. This makes LXC containers lightweight and efficient, with less overhead compared to VMs.

LXC offers a more traditional way of containerizing entire operating systems, as opposed to application-focused containerization solutions like Docker. While Docker focuses on packaging individual applications and their dependencies into containers, LXC provides a more complete environment that behaves like a full operating system.
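To make that distinction concrete, here is a minimal, hedged sketch of creating and entering a full system container with the classic LXC command-line tools; the container name and the download-template options are illustrative assumptions, not values from the article:

sudo apt install lxc                                                    # install the LXC userspace tools
sudo lxc-create -n demo -t download -- -d ubuntu -r jammy -a amd64      # build an Ubuntu container from the download template
sudo lxc-start -n demo                                                  # boot the container's own init system
sudo lxc-attach -n demo                                                 # open a shell inside the running container
sudo lxc-stop -n demo                                                   # shut the container down again

Unlike a Docker container that runs a single application process, the container above boots a full userspace, which is exactly the "lightweight virtual machine" behaviour described here.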

Efficient Text Processing in Linux: Awk, Cut, Paste [Linux Journal - The Original Magazine of the Linux Community]

Efficient Text Processing in Linux: Awk, Cut, Paste

Introduction

In the world of Linux, the command line is an incredibly powerful tool for managing and manipulating data. One of the most common tasks that Linux users face is processing and extracting information from text files. Whether it's log files, configuration files, or even data dumps, text processing tools allow users to handle these files efficiently and effectively.

Three of the most fundamental and versatile text-processing commands in Linux are awk, cut, and paste. These tools enable you to extract, modify, and combine data in a way that’s quick and highly customizable. While each of these tools has a distinct role, together they offer a robust toolkit for handling various types of text-based data. In this article, we will explore each of these tools, showcasing their capabilities and providing examples of how they can be used in day-to-day tasks.

The cut Command

The cut command is one of the simplest yet most useful text-processing tools in Linux. It allows users to extract sections from each line of input, based on delimiters or character positions. Whether you're working with tab-delimited data, CSV files, or any structured text data, cut can help you quickly extract specific fields or columns.

Definition and Purpose

The purpose of cut is to enable users to cut out specific parts of a file. It's highly useful for dealing with structured text like CSVs, where each line represents a record and the fields are separated by a delimiter (e.g., a comma or tab).

Basic Syntax and Usage

cut -d [delimiter] -f [fields] [file]

  • -d [delimiter]: This option specifies the delimiter, which is the character that separates fields in the text. By default, cut treats tabs as the delimiter.
  • -f [fields]: This option is used to specify which fields you want to extract. Fields are numbered starting from 1.
  • [file]: The name of the file you want to process.
Examples of Common Use Cases
  1. Extracting columns from a CSV file

Suppose you have a CSV file called data.csv with the following content:

Name,Age,Location
Alice,30,New York
Bob,25,San Francisco
Charlie,35,Boston

To extract the "Name" and "Location" columns, you would use:

cut -d ',' -f 1,3 data.csv

This will output:

Name,Location
Alice,New York
Bob,San Francisco
Charlie,Boston
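The option list above also mentions extraction by character position; as a quick hedged addition using the same hypothetical data.csv:

cut -c 1-4 data.csv      # print characters 1 through 4 of every line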

How to Configure Network Interfaces with Netplan on Ubuntu [Linux Journal - The Original Magazine of the Linux Community]

How to Configure Network Interfaces with Netplan on Ubuntu

Netplan is a modern network configuration tool introduced in Ubuntu 17.10 and later adopted as the default for managing network interfaces in Ubuntu 18.04 and beyond. With its YAML-based configuration files, Netplan simplifies the process of managing complex network setups, providing a seamless interface to underlying tools like systemd-networkd and NetworkManager.

In this guide, we’ll walk you through the process of configuring network interfaces using Netplan, from understanding its core concepts to troubleshooting potential issues. By the end, you’ll be equipped to handle basic and advanced network configurations on Ubuntu systems.

Understanding Netplan

Netplan serves as a unified tool for network configuration, allowing administrators to manage networks using declarative YAML files. These configurations are applied by renderers like:

  • systemd-networkd: Ideal for server environments.

  • NetworkManager: Commonly used in desktop setups.

The key benefits of Netplan include:

  1. Simplicity: YAML-based syntax reduces complexity.

  2. Consistency: A single configuration file for all interfaces.

  3. Flexibility: Supports both simple and advanced networking scenarios like VLANs and bridges.

Prerequisites

Before diving into Netplan, ensure you have the following:

  • A supported Ubuntu system (18.04 or later).

  • Administrative privileges (sudo access).

  • Basic knowledge of network interfaces and YAML syntax.

Locating Netplan Configuration Files

Netplan configuration files are stored in /etc/netplan/. These files typically end with the .yaml extension and may include filenames like 01-netcfg.yaml or 50-cloud-init.yaml.

Important Tips:
  • Backup existing configurations: Before making changes, create a backup with the command:

    sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak
  • YAML Syntax Rules: YAML is indentation-sensitive. Always use spaces (not tabs) for indentation.

Configuring Network Interfaces with Netplan

Here’s how you can configure different types of network interfaces using Netplan.

Step 1: Identify Network Interfaces

Before modifying configurations, identify available network interfaces using:
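As a hedged sketch of this step and of a simple static configuration: the interface name enp0s3, the addresses, and the file name below are illustrative assumptions, not values from the article.

ip link show                        # list the available interfaces and their names

# /etc/netplan/01-netcfg.yaml (hypothetical static configuration)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      addresses:
        - 192.168.1.10/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]

sudo netplan try                    # test the configuration with automatic rollback
sudo netplan apply                  # apply it permanently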

Navigating Service Management on Debian [Linux Journal - The Original Magazine of the Linux Community]

Navigating Service Management on Debian

Managing services effectively is a crucial aspect of maintaining any Linux-based system, and Debian, one of the most popular Linux distributions, is no exception. In modern Linux systems, Systemd has become the dominant init system, replacing traditional options like SysVinit. Its robust feature set, flexibility, and speed make it the preferred choice for system and service management. This article dives into Systemd, exploring its functionality and equipping you with the knowledge to manage services confidently on Debian.

What is Systemd?

Systemd is an init system and service manager for Linux operating systems. It is responsible for initializing the system during boot, managing system processes, and handling dependencies between services. Systemd’s design emphasizes parallelization, speed, and a unified approach to managing services and logging.

Key Features of Systemd:
  • Parallelized Service Startup: Systemd starts services in parallel whenever possible, improving boot times.

  • Unified Logging with journald: Centralized logging for system events and service output.

  • Consistent Configuration: Standardized unit files make service management straightforward.

  • Dependency Management: Ensures that services start and stop in the correct order.

Understanding Systemd Unit Files

At the core of Systemd’s functionality are unit files. These configuration files describe how Systemd should manage various types of resources or tasks. Unit files are categorized into several types, each serving a specific purpose.

Common Types of Unit Files:
  1. Service Units (.service): Define how services should start, stop, and behave.

  2. Target Units (.target): Group multiple units into logical milestones, like multi-user.target or graphical.target.

  3. Socket Units (.socket): Manage network sockets for on-demand service activation.

  4. Timer Units (.timer): Replace cron jobs by scheduling tasks.

  5. Mount Units (.mount): Handle filesystem mount points.

Structure of a Service Unit File:

A typical .service unit file includes the following sections:
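As a hedged sketch (the service name, paths, and user are made up for illustration), a minimal .service file and the commands to activate it might look like this:

# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=My example application
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload               # make Systemd pick up the new unit file
sudo systemctl enable --now myapp.service  # start the service and enable it at boot
sudo systemctl status myapp.service        # check that it is running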

27-01-2025

26-01-2025

20:11

Linux 6.14 Brings Some Systems Faster Suspend and Resume [Slashdot: Linux]

Amid the ongoing Linux 6.14 kernel development cycle, Phoronix spotted a pull request for ACPI updates which "will allow for faster suspend and resume cycles on some systems." Wikipedia defines ACPI as "an open standard that operating systems can use to discover and configure computer hardware components" for things like power management and putting unused hardware components to sleep. Phoronix reports: The ACPI change worth highlighting for Linux 6.14 is switching from msleep() to usleep_range() within the acpi_os_sleep() call in the kernel. This reduces spurious sleep time due to timer inaccuracy. Linux ACPI/PM maintainer Rafael Wysocki of Intel who authored this change noted that it could "spectacularly" reduce the duration of system suspend and resume transitions on some systems... Rafael explained in the patch making the sleep change: "The extra delay added by msleep() to the sleep time value passed to it can be significant, roughly between 1.5 ms on systems with HZ = 1000 and as much as 15 ms on systems with HZ = 100, which is hardly acceptable, at least for small sleep time values." One 2022 bug report complained a Dell XPS 13 using Thunderbolt took "a full 8 seconds to suspend and a full 8 seconds to resume even though no physical devices are connected." In November an Intel engineer posted on the kernel mailing list that the fix gave a Dell XPS 13 a 42% improvement in kernel resume time (from 1943ms to 1127ms).

Read more of this story at Slashdot.

10:27

Could New Linux Code Cut Data Center Energy Use By 30%? [Slashdot: Linux]

Two computer scientists at the University of Waterloo in Canada believe changing 30 lines of code in Linux "could cut energy use at some data centers by up to 30 percent," according to the site Data Centre Dynamics. It's the code that processes packets of network traffic, and Linux "is the most widely used OS for data center servers," according to the article: The team tested their solution's effectiveness and submitted it to Linux for consideration, and the code was published this month as part of Linux's newest kernel, release version 6.13. "All these big companies — Amazon, Google, Meta — use Linux in some capacity, but they're very picky about how they decide to use it," said Martin Karsten [professor of Computer Science in the Waterloo's Math Faculty]. "If they choose to 'switch on' our method in their data centers, it could save gigawatt hours of energy worldwide. Almost every single service request that happens on the Internet could be positively affected by this." The University of Waterloo is building a green computer server room as part of its new mathematics building, and Karsten believes sustainability research must be a priority for computer scientists. "We all have a part to play in building a greener future," he said. The Linux Foundation, which oversees the development of the Linux OS, is a founder member of the Green Software Foundation, an organization set up to look at ways of developing "green software" — code that reduces energy consumption. Karsten "teamed up with Joe Damato, distinguished engineer at Fastly" to develop the 30 lines of code, according to an announcement from the university. "The Linux kernel code addition developed by Karsten and Damato was based on research published in ACM SIGMETRICS Performance Evaluation Review" (by Karsten and grad student Peter Cai). Their paper "reviews the performance characteristics of network stack processing for communication-heavy server applications," devising an "indirect methodology" to "identify and quantify the direct and indirect costs of asynchronous hardware interrupt requests (IRQ) as a major source of overhead... "Based on these findings, a small modification of a vanilla Linux system is devised that improves the efficiency and performance of traditional kernel-based networking significantly, resulting in up to 45% increased throughput..."

Read more of this story at Slashdot.

24-01-2025

10:36

Linux 6.14 Adds Support For The Microsoft Copilot Key Found On New Laptops [Slashdot: Linux]

The Linux 6.14 kernel now maps out support for Microsoft's "Copilot" key "so that user-space software can determine the behavior for handling that key's action on the Linux desktop," writes Phoronix's Michael Larabel. From the report: A change made to the atkbd keyboard driver on Linux now maps the F23 key to support the default copilot shortcut action. The patch authored by Lenovo engineer Mark Pearson explains [...]. Now it's up to the Linux desktop environments to determine what to do if the new Copilot key is pressed. The patch was part of the input updates now merged for the Linux 6.14 kernel.

Read more of this story at Slashdot.

20-01-2025

17:23

Linux 6.13 Released [Slashdot: Linux]

"Nothing horrible or unexpected happened last week," Linux Torvalds posted tonight on the Linux kernel mailing list, "so I've tagged and pushed out the final 6.13 release." Phoronix says the release has "plenty of fine features": Linux 6.13 comes with the introduction of the AMD 3D V-Cache Optimizer driver for benefiting multi-CCD Ryzen X3D processors. The new AMD EPYC 9005 "Turin" server processors will now default to AMD P-State rather than ACPI CPUFreq for better power efficiency.... Linux 6.13 also brings more Rust programming language infrastructure and more. Phoronix notes that Linux 6.13 also brings "the start of Intel Xe3 graphics bring-up, support for many older (pre-M1) Apple devices like numerous iPads and iPhones, NVMe 2.1 specification support, and AutoFDO and Propeller optimization support when compiling the Linux kernel with the LLVM Clang compiler." And some lucky Linux kernel developers will also be getting a guitar pedal soldered by Linus Torvalds himself, thanks to a generous offer he announced a week ago: For _me_ a traditional holiday activity tends to be a LEGO build or two, since that's often part of the presents... But in addition to the LEGO builds, this year I also ended up doing a number of guitar pedal kit builds ("LEGO for grown-ups with a soldering iron"). Not because I play guitar, but because I enjoy the tinkering, and the guitar pedals actually do something and are the right kind of "not very complex, but not some 5-minute 555 LED blinking thing"... [S]ince I don't actually have any _use_ for the resulting pedals (I've already foisted off a few only unsuspecting victims^Hfriends), I decided that I'm going to see if some hapless kernel developer would want one.... as an admittedly pretty weak excuse to keep buying and building kits... "It may be worth noting that while I've had good success so far, I'm a software person with a soldering iron. You have been warned... [Y]ou should set your expectations along the lines of 'quality kit built by a SW person who doesn't know one end of a guitar from the other.'"

Read more of this story at Slashdot.

19-12-2024

21:05

LibreOffice 24.2.7 is now available – the last release in the 24.2 branch [Press Releases Archives - The Document Foundation Blog]

Berlin, 31 October 2024 – LibreOffice 24.2.7, the seventh and final planned minor update to the LibreOffice 24.2 branch, is available on our download page for Windows, macOS and Linux.

The release includes over 50 bug and regression fixes over LibreOffice 24.2.6 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.7 is aimed at mainstream users and enterprise production environments.

LibreOffice is the only office suite with a feature set comparable to the market leader, and offers a range of user interface options to suit all users, from traditional to modern Microsoft Office-style. The UI has been developed to make the most of different screen form factors by optimizing the space available on the desktop to put the maximum number of features just a click or two away.

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: LibreOffice in Business.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.

Availability of LibreOffice 24.2.7

LibreOffice 24.2.7 is available from our download page. Minimum requirements for proprietary operating systems are Windows 7 SP1 and macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: www.libreoffice.org/download/android-and-ios/.

This is planned to be the last minor update to the LibreOffice 24.2 branch, which reaches end-of-life in November. All users are then recommended to upgrade to the LibreOffice 24.8 stable branch.

LibreOffice users, free software advocates and community members can support The Document Foundation by making a donation on our donate page.

[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.2.7/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.2.7/RC2.

The Document Foundation announces the LibreOffice and Open Source Conference 2024 [Press Releases Archives - The Document Foundation Blog]

Berlin, 25 September 2024 – The LibreOffice and Open Source Conference 2024 will take place in Luxembourg from the 10 to the 12 October 2024. It will be hosted by the Digital Learning Hub and the local campus of 42 Luxembourg at the Terres Rouges buildings in Belval, Esch-sur-Alzette.

This is a key event that brings together the LibreOffice community – supporting the leading FOSS office suite – with a large number of stakeholders: large open source projects, international organizations and representatives from EU institutions and European government departments.

Organized in partnership with the Luxembourg Media & Digital Design Centre (LMDDC), which will host the EdTech track, the event is sponsored by allotropia and Collabora, the two companies contributing more actively to the development of LibreOffice; Passbolt, the Luxembourg made open source password manager for teams; and the Interdisciplinary Centre for Security, Reliability and Trust (SnT) of the University of Luxembourg.

In addition, local partners such as Luxembourg Convention Bureau, LIST, LU-CIX and Luxembourg House of Cybersecurity are supporting the organization of various aspects of the conference.

After the opening session in the morning of the 10 October, which includes institutional presentations from the Minister for Digitalisation, the Ministry of the Economy and the European Commission’s OSPO, there will be tracks about LibreOffice covering development, quality, security, documentation, localization, marketing and enterprise deployments, and tracks about open source covering technologies in education, OSS applications and cybersecurity. Another session will focus on OSPOs (Open Source Programme Offices).

The LibreOffice and Open Source Conference Luxembourg 2024 provides a platform to discuss the latest technical developments, community contributions, and the challenges facing open source software and communities of which TDF, LibreOffice and its community are important components. Professionals, developers, volunteers and users from various fields will share their experiences and collaborate on the future direction of the leading office suite.

Policy and decision makers will find counterparts from all over Europe with which they will be able to exchange ideas and experiences that will help them to promote and implement open source software in public, education and private sector organizations.

On 11 and 12 October, there will also be workshops focusing on different aspects of LibreOffice development, targeted to undergraduate Computer Science students or anyone who knows programming, and wants to become familiar with a large scale real world open source software project. To be able to better support the participants we limited the number of seats to 20 so register for the workshops as soon as possible to reserve your place.

Everyone is encouraged to register and participate in the conference to engage with the open source community, learn from different experts and contribute to meaningful discussions. Please note that, to avoid waste, we will plan for food, drinks and other free items for registered attendees so help us to cater for your needs by registering in time.

12-09-2024

17:41

LibreOffice 24.2.6 available for download, for the privacy-conscious user [Press Releases Archives - The Document Foundation Blog]

Berlin, 5 September 2024 – LibreOffice 24.2.6, the sixth minor release of the free, volunteer-supported office productivity suite for office environments and individuals, the best choice for privacy-conscious users and digital sovereignty, is available at https://www.libreoffice.org/download for Windows, macOS and Linux.

The release includes over 40 bug and regression fixes over LibreOffice 24.2.5 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.6 is aimed at mainstream users and enterprise production environments.

LibreOffice is the only office suite with a feature set comparable to the market leader, and offers a range of user interface options to suit all users, from traditional to modern Microsoft Office-style. The UI has been developed to make the most of different screen form factors by optimizing the space available on the desktop to put the maximum number of features just a click or two away.

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.

Availability of LibreOffice 24.2.6

LibreOffice 24.2.6 is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Windows 7 SP1 and macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/.

Next week, power users and technology enthusiasts will be able to download LibreOffice 24.8.1, the first minor release of the recently announced new version with many bug and regression fixes. A summary of the new features of the LibreOffice 24.8 family is available on this blog post: https://blog.documentfoundation.org/blog/2024/08/22/libreoffice-248/.

End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which is available for download from the following link: https://books.libreoffice.org/. In addition, they will be able to get first-level technical support from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org.

LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.

[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.6/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.6/RC2.

LibreOffice 24.8, for the privacy-conscious office suite user [Press Releases Archives - The Document Foundation Blog]

The new major release provides a wealth of new features, plus a large number of interoperability improvements

Berlin, 22 August 2024 – LibreOffice 24.8, the new major release of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple and Intel) and Linux is available from our download page. This is the second major release to use the new calendar-based numbering scheme (YY.M), and the first to provide an official package for Windows PCs based on ARM processors.

LibreOffice is the only office suite, or if you prefer, the only software for creating documents that may contain personal or confidential information, that respects the privacy of the user – thus ensuring that the user is able to decide if and with whom to share the content they have created. As such, LibreOffice is the best option for the privacy-conscious office suite user, and provides a feature set comparable to the leading product on the market. It also offers a range of interface options to suit different user habits, from traditional to contemporary, and makes the most of different screen sizes by optimising the space available on the desktop to put the maximum number of features just a click or two away.

The biggest advantage over competing products is the LibreOffice Technology engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those provided by ecosystem companies – are based. This allows LibreOffice to offer a better user experience and to produce identical and perfectly interoperable documents based on the two available ISO standards: the Open Document Format (ODT, ODS and ODP), and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a large amount of artificial complexity, which may create problems for users who are confident that they are using a true open standard.

End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which is available for download from the Bookshelf. In addition, they will be able to get first-level technical support from volunteers on user mailing lists and the Ask LibreOffice website.

New Features of LibreOffice 24.8

PRIVACY

  • If the option Tools ▸ Options ▸ LibreOffice ▸ Security ▸ Options ▸ Remove personal information on saving is enabled, then personal information will not be exported (author names and timestamps, editing duration, printer name and config, document template, author and date for comments and tracked changes)

WRITER

  • UI: handling of formatting characters, width of comments panel, selection of bullets, new dialog for hyperlinks, new Find deck in the sidebar
  • Navigator: adding cross-references by drag-and-drop items, deleting footnotes and endnotes, indicating images with broken links
  • Hyphenation: exclude words from hyphenation with new contextual menu and visualization, new hyphenation across columns, pages or spreads, hyphenation between constituents of a compound word

CALC

  • Addition of FILTER, LET, RANDARRAY, SEQUENCE, SORT, SORTBY, UNIQUE, XLOOKUP and XMATCH functions
  • Improvement of threaded calculation performance, optimization of redraw after a cell change by minimizing the area that needs to be refreshed
  • Cell focus rectangle moved apart from cell content
  • Comments can be edited and deleted from the Navigator’s right-click menu

IMPRESS & DRAW

  • In Normal view, it is now possible to scroll between slides, and the Notes are available as a collapsible pane under the slide
  • By default, the running Slideshow is now immediately updated when applying changes in EditView or in PresenterConsole, even on different Screens

CHART

  • New chart types “Pie-of-Pie” and “Bar-of-Pie” break down a slice of a pie as a pie or bar sub-chart respectively (this also enables import of such charts from OOXML files created with Microsoft Office)
  • Text inside chart’s titles, text boxes and shapes (and parts thereof) can now be formatted using the Character dialog

ACCESSIBILITY

  • Several improvements to the management of formatting options, which can be now announced properly by screen readers

SECURITY

  • New mode of password-based ODF encryption

INTEROPERABILITY

  • Support importing and exporting OOXML pivot table (cell) format definitions
  • PPTX files with heavy use of custom shapes now open faster

A video showcasing the most significant new features is available on YouTube and PeerTube.

Contributors to LibreOffice 24.8

There are 171 contributors to the new features of LibreOffice 24.8: 57% of code commits come from the 49 developers employed by companies on TDF’s Advisory Board – Collabora, allotropia and Red Hat – and other organisations, another 20% from seven developers at The Document Foundation, and the remaining 23% from 115 individual volunteer developers.

An additional 188 volunteers have committed localized strings in 160 languages, representing hundreds of people actually providing translations. LibreOffice 24.8 is available in 120 languages, more than any other desktop software, making it available to over 5.5 billion people in their native language. In addition, over 2.4 billion people speak one of these 120 languages as a second language (L2).

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: LibreOffice in Business.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.

Migrations to LibreOffice

The Document Foundation has developed a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the deployment of an LTS (long-term support) enterprise-optimised version of LibreOffice plus migration consulting and training provided by certified professionals who offer value-added solutions consistent with proprietary offerings. Reference: professional support page.

In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.

Availability of LibreOffice 24.8

LibreOffice 24.8 is available on our download page. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 [1] and Apple MacOS 10.15. LibreOffice Technology-based products for Android and iOS are listed on this page.

For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 24.2 family, which includes several months of back-ported fixes. The current release is LibreOffice 24.2.5.

LibreOffice users, free software advocates and community members can support The Document Foundation with a donation on our donate page.

[1] This does not mean that The Document Foundation suggests the use of this operating system, which is no longer supported by Microsoft itself, and as such should not be used for security reasons.

Release Notes: wiki.documentfoundation.org/ReleaseNotes/24.8

Press Kit with Images: nextcloud.documentfoundation.org/s/JEe8MkDZWMmAGmS

22-08-2024

17:51

Announcement of LibreOffice 24.2.5 Community, optimized for the privacy-conscious user [Press Releases Archives - The Document Foundation Blog]

Berlin, 11 July 2024 – LibreOffice 24.2.5 Community, the fifth minor release of the free, volunteer-supported office productivity suite for office environments and individuals, the best choice for privacy-conscious users and digital sovereignty, is available at www.libreoffice.org/download for Windows, macOS and Linux.

The release includes more than 70 bug and regression fixes over LibreOffice 24.2.4 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.5 Community is the most advanced version of the office suite and is aimed at power users but can be used safely in other environments.

LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all users, from traditional to modern Microsoft Office-style, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform. All products based on that platform share the same approach, optimised for the privacy-conscious user.

Availability of LibreOffice 24.2.5 Community

LibreOffice 24.2.5 Community is available at www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: www.libreoffice.org/download/android-and-ios/

For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains a version with some months of back-ported fixes. The current release has reached the end of life, so users should update to LibreOffice 24.2.5 when the new major release LibreOffice 24.8 becomes available in August.

The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: ask.libreoffice.org

LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at www.libreoffice.org/donate

[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.2.5/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.2.5/RC2.

LibreOffice 24.2.4 Community available for download [Press Releases Archives - The Document Foundation Blog]

Berlin, 6 June 2024 – LibreOffice 24.2.4 Community, the fourth minor release of the free, volunteer-supported office suite for personal productivity in office environments, is now available at https://www.libreoffice.org/download for Windows, MacOS and Linux.

The release includes over 70 bug and regression fixes over LibreOffice 24.2.3 [1] to improve the stability and robustness of the software. LibreOffice 24.2.4 Community is the most advanced version of the office suite, offering the best features and interoperability with Microsoft Office proprietary formats.

LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all user habits, from traditional to modern, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.

Availability of LibreOffice 24.2.4 Community

LibreOffice 24.2.4 Community is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple MacOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/

For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 7.6 family, which includes several months of back-ported fixes. The current release is LibreOffice 7.6.7 Community, but it will soon be replaced by LibreOffice 24.2.4 when the new major release LibreOffice 24.8 becomes available.

The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org

LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.

[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.4/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.4/RC2.

21-03-2024

19:41

LVM Logical Volumes [linux blogs franz ulenaers]

LVM = Logical Volume Manager



A partition of type "Linux LVM" can be used for logical volumes, but also as a "snapshot"!
A snapshot can be an exact copy of a logical volume frozen at a particular moment: this makes it possible to take consistent backups of logical volumes
while the logical volumes remain in use!
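A hedged sketch of such a snapshot-based backup (the volume group mydell and the home volume follow the examples further below; the snapshot size and backup path are assumptions):

sudo lvcreate -s -L 5G -n home_snap /dev/mydell/home   # create a 5 GB snapshot of the home volume
sudo mount -o ro /dev/mydell/home_snap /mnt            # mount the frozen copy read-only
sudo tar -czf /backup/home.tar.gz -C /mnt .            # back up from the snapshot while home stays in use
sudo umount /mnt
sudo lvremove /dev/mydell/home_snap                    # remove the snapshot when the backup is done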

How to install?

    sudo apt-get install lvm2



Create a physical volume on a partition

    command = 'pvcreate' partition

      example:

        the partition must be of type "Linux LVM"!

        pvcreate /dev/sda5



Create a volume group

    vgcreate vg_storage partition

      example:

        vgcreate mijnvg /dev/sda5



Add a logical volume to a volume group

    lvcreate -L size_in_M/G -n logical_volume_name volume_group

      example:

        lvcreate -L 30G -n mijnhome mijnvg



Activate a volume group

    vgchange -a y volume_group_name

      example:

        vgchange -a y mijnvg



My physical and logical volumes

    physical volume

      pvcreate /dev/sda1

    volume group

      vgcreate mydell /dev/sda1

    logical volumes

      lvcreate -L 1G -n boot mydell

      lvcreate -L 100G -n data mydell

      lvcreate -L 50G -n home mydell

      lvcreate -L 50G -n root mydell

      lvcreate -L 1G -n swap mydell



Growing/shrinking a logical volume

    grow my home logical volume by 1 G:

      lvextend -L +1G /dev/mapper/mydell-home

    beware: shrinking a logical volume can lead to data loss if there is not enough free space!

lvreduce -L -1G /dev/mapper/mydell-home
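Shrinking is safer when the filesystem is reduced first; a hedged sketch for an ext4 volume (the target size is an assumption, and the volume must be unmounted):

sudo umount /dev/mapper/mydell-home              # the filesystem may not be mounted while shrinking
sudo e2fsck -f /dev/mapper/mydell-home           # force a filesystem check first
sudo resize2fs /dev/mapper/mydell-home 92G       # shrink the filesystem to below the new volume size
sudo lvreduce -L -1G /dev/mapper/mydell-home     # then shrink the logical volume itself
# or, alternatively, let LVM resize the filesystem for you in one step:
sudo lvreduce -r -L -1G /dev/mapper/mydell-home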



Show physical volumes

sudo pvs

    columns shown: PV physical volume, VG volume group, Fmt format (normally lvm2), Attr attributes, PSize size of the PV, PFree free space

      PV          VG      Fmt   Attr  PSize    PFree

      /dev/sda6   mydell  lvm2  a--   920,68g  500,63g

sudo pvs -a

sudo pvs /dev/sda6



Backing up the logical volume settings

    see the included script LVM_bkup



Show volume groups

    sudo vgs

      VG      #PV  #LV  #SN  Attr    VSize    VFree

      mydell    1    6    0  wz--n-  920,68g  500,63g



Show logical volumes

    sudo lvs

      LV        VG      Attr        LSize    Pool Origin Data% Meta% Move Log Cpy%Sync Convert

      boot      mydell  -wi-ao----  952,00m

      data      mydell  -wi-ao----  100,00g

      home      mydell  -wi-ao----   93,13g

      mintroot  mydell  -wi-a-----  101,00g

      root      mydell  -wi-ao----   94,06g

      swap      mydell  -wi-ao----   30,93g



How to remove a logical volume?

    a logical volume can only be removed when the volume group is not active

      this can be done with the vgchange command

        vgchange -a n mydell

    lvremove /dev/volume_group_name/logical_volume_name

      example:

lvremove /dev/mydell/data





How to remove a physical volume from a volume group?

vgreduce mydell /dev/sda1




Attachments: LVM_bkup (0.8 KB)




How to mount and unmount a USB stick without being root and with your own rwx permissions! [linux blogs franz ulenaers]

Mounting a stick without root

How do you mount and unmount a USB stick without being root and with rwx permissions?
---------------------------------------------------------------------------------------------------------
(rename every ulefr01 to your own username!)

Label the stick

  • use the 'fatlabel' command to assign a volume name or label if your USB stick uses a vfat filesystem

  • use the 'tune2fs' command for ext2/3/4

    • to set the volume name stick32GB on your USB stick, run the command:

sudo tune2fs -L stick32GB /dev/sdc1

note: substitute the correct device for /dev/sdc1!


Make the filesystem on your stick clean

  • after mounting you may see dmesg messages such as: Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

    • use the filesystem consistency check command fsck to fix this

      • do a umount before you run the fsck command! (use the correct device!)

        • fsck /dev/sdc1

note: substitute your own device for /dev/sdc1!


Set permissions on the directories and files of your stick

  • Insert your stick into a USB port and unmount it

sudo chown ulefr01:ulefr01 /media/ulefr01/ -R
  • set an ACL on your ext2/3/4 stick (does not work on vfat!)

setfacl -m u:ulefr01:rwx /media/ulefr01
  • with getfacl you can view the ACL

getfacl /media/ulefr01
  • with the ls command you can see the result

ls /media/ulefr01 -dla

drwxrwx--- 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01

note: if the '+' is present, an ACL is already in place, as on the following line:

drwxrwx---+ 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01


Mount the stick

  • Insert your stick into a USB port and check whether it is mounted automatically

  • check the permissions of existing files and directories on your stick

ls * -la

  • if root or other ownership is already present, reset it with the following command

sudo chown ulefr01:ulefr01 /media/ulefr01/stick32GB -R

Create a directory for each stick

  • cd /media/ulefr01

  • mkdir mmcblk16G stick32GB stick16gb


Adjust /etc/fstab

  • add a line for each stick

    • examples

LABEL=mmcblk16G /media/ulefr01/mmcblk16G ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick32GB /media/ulefr01/stick32GB ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick16gb /media/ulefr01/stick16gb vfat user,defaults,noauto 0 0


Check the following

  • the following should now be possible:

    • mount and umount without being root

    • note: you cannot umount if the mount was done by root! If that is the case, first umount as root; after that, mount as a regular user and then you can umount as well.

    • put a new file on your stick without being root

    • put a new directory on your stick without being root

  • check that you can create new files without being root

        • touch test

        • ls test -la

        • rm test


Setting an ACL [linux blogs franz ulenaers]

setfacl

note: usually available on Linux filesystems such as btrfs, ext2, ext3, ext4 and ReiserFS!

  • How to set an ACL for a single user?

setfacl -m u:ulefr01:rwx /home/ulefr01

note: use your own username instead of ulefr01

  • How to remove an ACL?

setfacl -x u:ulefr01 /home/ulefr01
  • How to set ACLs for two or more users?

setfacl -m u:ulefr01:rwx /home/ulefr01

setfacl -m u:myriam:r-x /home/ulefr01

note: use your second username instead of myriam; here myriam has no w (write) access, but does have r (read) and x (execute)!

  • How to list the ACLs that have been set?

getfacl home/ulefr01
getfacl: Removing leading '/' from absolute path names
# file: home/ulefr01
# owner: ulefr01
# group: ulefr01
user::rwx
user:ulefr01:rwx
user:myriam:r-x
group::---
mask::rwx
other::---
  • How to check the result?

getfacl home/ulefr01
 see above
ls /home/ulefr01 -dla
drwxrwx---+  ulefr01 ulefr01 4096 okt 1 18:40  /home/ulefr01

note the + sign!


The best (most performant) filesystem on a USB stick: how to set it up? [linux blogs franz ulenaers]

The best filesystem on a USB stick, how to set it up?

The best (most performant) filesystem is ext4

  • how to set it up?

mkfs.ext4 $device
  • first turn the journal off

tune2fs -O ^has_journal $device
  • use journaling only in data_writeback mode

tune2fs -o journal_data_writeback $device
  • don't use reserved space; set it to zero.

tune2fs -m 0 $device


  • the included bash script can be used for the three actions above:



file USBperf

#!/bin/bash
# USBperfext4


echo 'USBperf'

echo '--------'

echo 'ext4 device ?'

read device

echo "device= $device"

echo 'ok ?'

read ok

if [ -z "$ok" ] || [ "$ok" = "n" ] || [ "$ok" = "N" ]

then

   echo 'not ok - stopping'

   exit 1

fi

echo "disable journaling: tune2fs -O ^has_journal $device"

tune2fs -O ^has_journal $device

echo "use writeback data mode for the journal: tune2fs -o journal_data_writeback $device"

tune2fs -o journal_data_writeback $device

echo "disable reserved space: tune2fs -m 0 $device"

tune2fs -m 0 $device

echo 'done!'

read ok

echo "device= $device"

exit 0


  • adjust the /etc/fstab file for your USB stick

    • use the 'noatime' option

Encryption [linux blogs franz ulenaers]

With encryption you can protect the data on your computer by making it unreadable to the outside world!

How can you encrypt a filesystem?

Install the following open source packages:

    loop-aes-utils and cryptsetup

            apt-get install loop-aes-utils

            apt-get install cryptsetup

        modprobe cryptoloop
        add the following modules to your /etc/modules:
            aes
            dm_mod
            dm_crypt
            cryptoloop

How to create an encrypted filesystem?

  1. dd if=/dev/zero of=/home/cryptfile bs=1M count=650
this creates a file of 650 MB
  2. losetup -e aes /dev/loop0 /home/cryptfile
you will then be asked for a password of at least 20 characters
  3. mkfs.ext3 /dev/loop0
creates an ext3 filesystem with journaling
  4. mkdir /mnt/crypt
creates an empty directory
  5. mount /dev/loop0 /mnt/crypt -t ext3
now you have a filesystem available under /mnt/crypt

....

You can make your filesystem available automatically with the following entry in your /etc/fstab:

/home/cryptfile /mnt/crypt ext3 auto,encryption=aes,user,exec 0 0

....

You can turn the encryption off with:

umount /mnt/crypt


losetup -d /dev/loop0        (this is no longer needed if you have the following entry in your /etc/fstab:
                /home/cryptfile /mnt/crypt ext3 auto,encryption=aes,exec 0 0
....
You can mount it manually with:
  • losetup -e aes /dev/loop0 /home/cryptfile
 you will be asked to enter a password of at least 20 characters
if the password is wrong, you will get the following message:
        mount: wrong fs type, bad option, bad superblock on /dev/loop0,
        or too many mounted file systems
        ..
  • mount /dev/loop0 /mnt/crypt -t ext3
this mounts the filesystem


14-03-2024

19:45

App Launchers for Ubuntu 19.04 [Tech Drive-in]

During the transition period, when GNOME Shell and Unity were pretty rough around the edges and slow to respond, 3rd party app launchers were a big deal. Over time the newer desktop environments improved and became fast, reliable and predictable, reducing the need for alternate app launchers.


As a result, many third-party app launchers have either slowed down development or simply ceased to exist. Ulauncher seems to be the only one to have bucked the trend so far. Synapse and Kupfer, on the other hand, though old and not as actively developed anymore, still pack a punch. Since Kupfer is too old school, we'll only be discussing Synapse and Ulauncher here.

Synapse

I still remember the excitement when I first reviewed Synapse more than 8 years ago. Back then, Synapse was something very unique to Linux and Ubuntu, and it still is in many ways. Though Synapse is not the active project it used to be, the launcher still works great even in brand new Ubuntu 19.04.

synapse ubuntu 19.04
 
No need to meddle with PPAs and DEBs, Synapse is available in Ubuntu Software Center.

ulauncher ubuntu 19.04 disco
 
CLICK HERE to directly find and install Synapse from Ubuntu Software Center, or simply search 'Synapse' in USC. Launch the app afterwards. Once launched, you can trigger Synapse with Ctrl+Space keyboard shortcut.

Ulauncher

The new kid on the block, apparently. But new doesn't mean it is lacking in any way. What makes Ulauncher quite unique are its extensions. And there are plenty to choose from.

ulauncher ubuntu 19.04

From an extension that lets you control your Spotify desktop app, to generic unit converters or simple timers, Ulauncher extensions have got you covered.

Let's install the app first. Download the DEB file for Debian/Ubuntu users and double-click the downloaded file to install it. To complete the installation via Terminal instead, do this:


sudo dpkg -i ~/Downloads/ulauncher_4.3.2.r8_all.deb

Change filename/location if they are different in your case. And if the command reports dependency errors, make a force install using the command below.

sudo apt-get install -f

Done. Post install, launch the app from your app-list and you're good to go. Once started, Ulauncher will sit in your system tray by default. And just like Synapse, Ctrl+Space will trigger Ulauncher.


Installing extensions in Ulauncher is pretty straightforward too.


Find the extensions you want from the Ulauncher Extensions page. Trigger a Ulauncher instance with Ctrl+Space and go to Settings > Extensions > Add extension. Provide the URL from the extension page and let the app do the rest.

A Standalone Video Player for Netflix, YouTube, Twitch on Ubuntu 19.04 [Tech Drive-in]

Snap apps are a godsend. ElectronPlayer is an Electron-based app available on the Snap Store that doubles up as a standalone media player for video streaming services such as Netflix, YouTube, Twitch, Floatplane etc.

And it works great on Ubuntu 19.04 "disco dingo". From what we've tested, Netflix works like a charm, and so does YouTube. ElectronPlayer also has a picture-in-picture mode that lets it run above desktop and full-screen applications.

netflix player ubuntu 19.04

For me, this is great because I can free-up tabs on my Firefox window which are almost never clutter-free.

Use the command below to install ElectronPlayer directly from Snapstore. Open Terminal (Ctrl+Alt+t) and copy:

sudo snap install electronplayer

Press ENTER and give password when asked.

After the process is complete, search for ElectronPlayer in your App list. Sign in to your favorite video streaming services and you are good to go. Let us know your feedback in the comments.

Howto Upgrade to Ubuntu 19.04 from Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]

As most of you should know already, Ubuntu 19.04 "disco dingo" has been released. A lot of things have changed, see our comprehensive list of improvements in Ubuntu 19.04. Though it is not really necessary to make the jump, I'm sure many here would prefer to have the latest and greatest from Ubuntu. Here's how you upgrade to Ubuntu 19.04 from Ubuntu 18.10 and Ubuntu 18.04.

Upgrading to Ubuntu 19.04 from Ubuntu 18.04 LTS is tricky. There is no way you can make the jump from Ubuntu 18.04 LTS directly to Ubuntu 19.04. For that, you need to upgrade to Ubuntu 18.10 first. Pretty disappointing, I know. But when upgrading an entire OS, you can't be too careful.

And the process itself is not as tedious or time-consuming as a Windows upgrade. And also unlike Windows, the upgrades are not forced upon you while you're in the middle of something.

how to upgrade to ubuntu 19.04

If you wonder how the dock in the above screenshot rests at the bottom of the Ubuntu desktop, it's the dash-to-dock GNOME Shell extension. That and more Ubuntu 19.04 tips and tricks here.

Upgrade to Ubuntu 19.04 from Ubuntu 18.10

Disclaimer: PLEASE backup your critical data before starting the upgrade process.

Let's start with the assumption that you're on Ubuntu 18.04 LTS.
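As a hedged sketch of that first 18.04-to-18.10 hop (Ubuntu 18.10 is a non-LTS release, so the release-upgrade prompt has to be set to 'normal'; the sed one-liner below is just one way to do that):

sudo apt update && sudo apt full-upgrade                                        # bring 18.04 fully up to date first
sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades  # allow upgrades to non-LTS releases
sudo do-release-upgrade                                                         # upgrade 18.04 LTS to 18.10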

After running the upgrade from Ubuntu 18.04 LTS to Ubuntu 18.10, the prompt will ask for a full system reboot. Please do that, and make sure everything is running smoothly afterwards. Now you have a clean new Ubuntu 18.10 up and running. Let's begin the Ubuntu 19.04 upgrade process.
  • Make sure your laptop is plugged-in, this is going to take time. Stable Internet connection is a must too. 
  • Run your Software Updater app, and install all the updates available. 
how to upgrade to ubuntu 19.04 from ubuntu 18.10

  • Post the update, you should be prompted with an "Ubuntu 19.04 is available" window. It will guide you through the required steps without much hassle. 
  • If not, fire up Software & Updates app and check for updates. 
  • If both these didn't work in your case, there's always the command-line option to force the upgrade. Open the Terminal app (keyboard shortcut: CTRL+ALT+T), and run the command below.
sudo do-release-upgrade -d
  • Type the password when prompted. Don't let the simplicity of the command fool you, this is just the start of a long and complicated process. do-release command will check for available upgrades and then give you an estimated time and bandwidth required to complete the process. 
  • Read the instructions carefully and proceed. The process took only about an hour or less for me. It entirely depends on your internet speed and system resources.
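About that Prompt setting: do-release-upgrade only offers a new release if the Prompt line in /etc/update-manager/release-upgrades allows it. A minimal check, assuming the stock layout of that file:

cat /etc/update-manager/release-upgrades
sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades

The first command shows the current setting; the second sets Prompt=normal so non-LTS releases like 19.04 are offered. If the file already says Prompt=normal, skip the second command.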
So, how did it go? Was the upgrade process as smooth as it should be? And what do you think about the new Ubuntu 19.04 "disco dingo"? Let us know in the comments.

15 Things I Did Post Ubuntu 19.04 Installation [Tech Drive-in]

Ubuntu 19.04, codenamed "Disco Dingo", has been released (and upgrading is easier than you think). I've been on Ubuntu 19.04 since its first Alpha, and this has been a rock solid release as far as I'm concerned. Changes in Ubuntu 19.04 are more evolutionary though, but the availability of the latest Linux kernel version 5.0 is significant.

ubuntu 19.04 things to do after install

Unity is long gone and Ubuntu 19.04 is unmistakably GNOME 3.x now, which is not necessarily a bad thing. Yes, I know, there are many who still swear by the simplicity of the Unity desktop. But I'm an outlier here; I liked both Unity and GNOME 3.x even in their very early avatars. When I wrote this review of the GNOME Shell desktop almost 8 years ago, I knew it was destined for greatness. Ubuntu 19.04 "Disco Dingo" runs GNOME 3.32.0.


We'll discuss more about GNOME 3.x and Ubuntu 19.04 in the official review. Let's get down to brass tacks. A step-by-step guide to the things I did after installing Ubuntu 19.04 "Disco Dingo". 

1. Make sure your system is up-to-date

Do a full system update. Fire up your Software Updater and check for updates.

how to update ubuntu 19.04

OR
Via the Terminal, which is my preferred way to update Ubuntu. Just one command.

sudo apt update && sudo apt dist-upgrade

Enter password when prompted and let the system do the rest.

2. Install GNOME Tweaks

GNOME Tweaks is non-negotiable.

things to do after installing ubuntu 19.04

GNOME Tweaks is an app that lets you tweak little things in GNOME-based OSes that are otherwise hidden behind menus. If you are on Ubuntu 19.04, Tweaks is a must. Honestly, I don't remember if it was installed by default. But here's how you install it anyway; Apt-URL will prompt you if the app already exists.

Search for Gnome Tweaks in Ubuntu Software Center. OR simply CLICK HERE to go straight to the app in Software Center. OR even better, copy-paste this command in Terminal (keyboard shortcut: CTRL+ALT+T).

sudo apt install gnome-tweaks

3. Enable MP3/MP4/AVI Playback, Adobe Flash etc.

You do have an option to install most of the 'restricted-extras' while installing the OS itself now, but if you are not sure you've ticked all the right boxes, just run the following command in Terminal.

sudo apt install ubuntu-restricted-extras

OR

You can install it straight from the Ubuntu Software Center by CLICKING HERE.

4. Display Date/Battery Percentage on Top Panel  

The screenshot, I hope, is self-explanatory.

things to do after installing ubuntu 19.04

If you have GNOME Tweaks installed, this is easily done. Open GNOME Tweaks, go to the 'Top Bar' side menu and enable/disable what you need.
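If you'd rather skip the GUI, the same toggles are exposed as gsettings keys. A minimal sketch, assuming the standard GNOME desktop schemas (present on stock Ubuntu 19.04, but double-check with gsettings list-keys if unsure):

gsettings set org.gnome.desktop.interface clock-show-date true
gsettings set org.gnome.desktop.interface show-battery-percentage true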

5. Enable 'Click to Minimize' on Ubuntu Dock

Honestly, I don't have a clue why this is disabled by default. You intuitively expect the app shortcuts on the Ubuntu dock to 'minimize' when you click on them (at least I do).

In fact, the feature is already there, all you need to do is to switch it ON. Do this in Terminal.

gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'

That's it. Now if you didn't find the 'click to minimize' feature useful, you can always revert Dock settings back to its original state, by copy-pasting the following command in Terminal app.

gsettings reset org.gnome.shell.extensions.dash-to-dock click-action

6. Pin/Unpin Apps from Launcher

There are a bunch of apps that are pinned to your Ubuntu launcher by default.

things to do after ubuntu 19.04
 
For example, I almost never use the 'Help' app or the 'Amazon' shortcut preloaded on the launcher. But I would prefer a shortcut to the Terminal app instead. Right-click on your preferred app on the launcher, and add-to/remove-from favorites as you please.
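If you like scripting these things, the pinned apps live in a single gsettings key. A small sketch; the .desktop file names below are only examples, so list the current value first and adapt:

gsettings get org.gnome.shell favorite-apps
gsettings set org.gnome.shell favorite-apps "['firefox.desktop', 'org.gnome.Nautilus.desktop', 'org.gnome.Terminal.desktop']"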

7. Enable GNOME Shell Extensions Support

Extensions are an integral part of GNOME desktop.

It's a real shame that one has to go through all this for such a basic yet important feature. From the default Firefox browser, when you visit the GNOME Extensions page, you will notice a warning message on top describing the unavailability of extensions support. Follow its prompt to install the browser add-on first.
Now for the second part, you need to install the host connector on Ubuntu.
sudo apt install chrome-gnome-shell
  • Done. Don't mind the "chrome" in 'chrome-gnome-shell', it works with all major browsers, provided you have the correct browser add-on installed. 
  • You can now visit GNOME Extensions page and install extensions as you wish with ease. (if it didn't work immediately, a system restart will clear things up). 
Extensions are such an integral part of the GNOME desktop experience that I can't understand why this is not a system default in Ubuntu 19.04. Hopefully future releases of Ubuntu will have this figured out.

8. My Favourite 5 GNOME Shell Extensions for Ubuntu 19.04


9. Remove Trash Icon from Desktop

Annoyed by the permanent presence of the Home and Trash icons on the desktop? You are not alone. Luckily, there's an extension for that!
Once the extension is installed, access its settings and enable/disable the icons as you please. 


Extension settings can be accessed directly from the extension home page (notice the small wrench icon near the ON/OFF toggle). OR you can use the Extensions addon like in the screenshot above.

10. Enable/Disable Two Finger Scrolling

As you must've noticed, two-finger scrolling has been a system default for some time now. 

things to do after installing ubuntu cosmic
 
One of my laptops acts strangely when two-finger scrolling is on. You can easily disable two-finger scrolling and enable old-school edge scrolling in 'Settings'.  Settings > Mouse and Touchpad
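The same switch is also available from the command line. A minimal sketch, assuming the standard GNOME touchpad schema:

gsettings set org.gnome.desktop.peripherals.touchpad two-finger-scrolling-enabled false
gsettings set org.gnome.desktop.peripherals.touchpad edge-scrolling-enabled true

Set the first key back to true if you change your mind.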

Quicktip: You can go straight to submenus by simply searching for it in GNOME's universal search bar.

ubuntu 19.04 disco

Take for example the screenshot above, where I triggered the GNOME menu by hitting Super(Windows) key, and simply searched for 'mouse' settings. The first result will take me directly to the 'Settings' submenu for 'Mouse and Touchpad' that we saw earlier. Easy right? More examples will follow.

11. Nightlight Mode ON

When you're glued to your laptop/PC screen for a large amount of time every day, it is advisable that you enable the automatic nightlight mode for the sake of your eyes. Be it the laptop or my phone, this has become an essential feature. The sight of an LED display without nightlight ON during lowlight conditions immediately gives me a headache these days. Easily one of my favourite in-built features on GNOME.


Settings > Devices > Display > Night Light ON/OFF

things to do after installing ubuntu 19.04

OR, as before, hit the Super key and search for 'night light'. It will take you straight to the submenu under Devices > Display. Guess you won't need any more examples on that.
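For the Terminal-inclined, Night Light can be toggled with a single gsettings key too; a sketch, assuming the standard GNOME color plugin schema:

gsettings set org.gnome.settings-daemon.plugins.color night-light-enabled true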

things to do after installing ubuntu 19.04

12. Privacy on Ubuntu 19.04

Guess I don't need to lecture you on the importance of privacy in the post-PRISM era.

ubuntu 19.04 privacy

Ubuntu remembers your usage & history to recommend you frequently used apps and such. And this is never shared over the network. But if you're not comfortable with this, you can always disable and delete your usage history on Ubuntu. Settings > Privacy > Usage & History 

13. Perhaps a New Look & Feel?

As you might have noticed, I'm not using the default Ubuntu theme here.

themes ubuntu 19.04

Right now I'm using System76's Pop OS GTK theme and icon sets. They look pretty neat, I think. Just three commands to install them on your Ubuntu 19.04 (a fourth if you want the wallpapers too).

sudo add-apt-repository ppa:system76/pop
sudo apt-get update 
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme 
sudo apt install pop-wallpapers 

Execute the last command only if you want the Pop OS wallpapers as well. To enable the newly installed theme and icon sets, launch GNOME Tweaks > Appearance (see screenshot). I will be making separate posts on themes, icon sets and GNOME shell extensions. So stay subscribed. 

14. Disable Error Reporting

If you find the "application closed unexpectedly" popups annoying, and would like to disable error reporting altogether, this is what you need to do.


Settings > Privacy > Problem Reporting and switch it off. 
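If you prefer the Terminal, the same thing can be achieved by disabling Apport, Ubuntu's error reporting service, in its config file:

sudo gedit /etc/default/apport

Change the "enabled=1" entry to "enabled=0", save the file, and error reporting is completely disabled.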

15. Liberate vertical space on Firefox by disabling Title Bar

This is not an Ubuntu specific tweak.


Firefox > Settings > Customize. Notice the "Title Bar" at the bottom left? Untick to disable.

Follow us on Facebook, and Twitter.

Ubuntu 19.04 Gets Newer and Better Wallpapers [Tech Drive-in]

A "Disco Dingo" themed wallpaper was already there. But the latest update bring a bunch of new wallpapers as system defaults on Ubuntu 19.04.

ubuntu 19.04 wallpaper

Pretty right? Here's the older one for comparison.

ubuntu 19.04 updates

The newer wallpaper is definitely cleaner and more professional looking, with better colors. I won't bother tinkering with wallpapers anymore; the new default on Ubuntu 19.04 is just perfect.

ubuntu 19.04 wallpapers

Too funky for my taste. But I'm sure there will be many who will prefer this darker, edgier, wallpaper over the others. As we said earlier, the new "disco dingo" mascot calls for infinite wallpaper variations.


Apart from theme and artwork updates, Ubuntu 19.04 has the latest Linux Kernel version 5.0 (5.0.0.8 to be precise). You can read more about Ubuntu 19.04 features and updates here.

Ubuntu 19.04 hit beta a few days ago. Though it is a pretty stable release already for a beta, I'd recommend waiting another 15 days or so until the final release. If all you care about are the wallpapers, you can download the new Ubuntu 19.04 wallpapers here. It's a DEB file; just double-click it after downloading.

LinuxBoot: A Linux Foundation Project to replace UEFI Components [Tech Drive-in]

UEFI has a pretty bad reputation among many in the Linux community. UEFI unnecessarily complicated Linux installation and distro-hopping on machines with Windows pre-installed, for example. The LinuxBoot project by the Linux Foundation aims to replace some firmware functionality, like the UEFI DXE phase, with Linux components.

What is UEFI?
UEFI is a standard or a specification that replaced legacy BIOS firmware, which was the industry standard for decades. Essentially, UEFI defines the software components between operating system and platform firmware.


UEFI boot has three phases: SEC, PEI and DXE. DXE, short for Driver eXecution Environment, is the phase where the UEFI system loads drivers for configured devices. LinuxBoot replaces specific firmware functionality, like the UEFI DXE phase, with a Linux kernel and runtime.

LinuxBoot and the Future of System Startup
"Firmware has always had a simple purpose: to boot the OS. Achieving that has become much more difficult due to increasing complexity of both hardware and deployment. Firmware often must set up many components in the system, interface with more varieties of boot media, including high-speed storage and networking interfaces, and support advanced protocols and security features."  writes Linux Foundation.

linuxboot uefi replacement

LinuxBoot will replace this slow and often error-prone code with a Linux Kernel. This alone should significantly improve system startup performance.

On top of that, LinuxBoot intends to achieve increased boot reliability and boot-time performance by removing unnecessary code and by using reliable Linux drivers instead of lightly tested firmware drivers. LinuxBoot claims that these improvements could potentially help make the system startup process as much as 20 times faster.

In fact, this "Linux to boot Linux" technique has been fairly common place in supercomputers, consumer electronics, and military applications, for decades. LinuxBoot looks to take this proven technique and improve on it so that it can be deployed and used more widely by individual users and companies.

Current Status
LinuxBoot is not as obscure or far-fetched as, say, lowRISC (an open-source, Linux-capable SoC) or even OpenPilot. At the FOSDEM 2019 summit, Facebook engineers revealed that their company is actively integrating and fine-tuning LinuxBoot to its needs, freeing its hardware down to the lowest levels.


Facebook and Google are deeply involved in the LinuxBoot project. Being large data companies, where even small improvements in system startup speed and reliability can bring major advantages, their involvement is not a surprise. To put this in perspective, a large data center run by Google or Facebook can have tens of thousands of servers. Other companies involved include Horizon Computing, Two Sigma and 9elements Cyber Security.

Look up Uber Time, Price Estimates on Terminal with Uber CLI [Tech Drive-in]

The worldwide phenomenon that is Uber needs no introduction. Uber is an immensely popular ride-sharing and ride-hailing company valued in the billions. Uber is so disruptive and controversial that many cities and even countries are putting up barriers to protect the interests of local taxi drivers.

Enough about Uber as a company. To those among you who regularly use Uber app for booking a cab, Uber CLI could be a useful companion.


Uber CLI can be a great tool for the easily distracted. This unique command line application allows you to look up Uber cab's time and price estimates without ever taking your eyes off the laptop screen.

Install Uber-CLI using NPM

You need to have NPM first to install Uber-CLI on Ubuntu. npm, short for Node.js package manager, is a package manager for the JavaScript programming language. It is the default package manager for the JavaScript runtime environment Node.js. npm has a command line based client and its own repository of packages.

This is how to install npm on Ubuntu 19.04, and Ubuntu 18.10. And thereafter, using npm, install Uber-CLI. Fire up the Terminal and run the following.

sudo apt update
sudo apt install nodejs npm
sudo npm install -g uber-cli

And you're done. Uber CLI is a command-line application; here are a few examples of how it works in Terminal. Also, since Uber is not available where I live, I couldn't vouch for its accuracy.


Uber-CLI has just two use cases.
uber time 'pickup address here'
uber price -s 'start address' -e 'end address'
Easy, right? I did some testing with places and addresses I'm familiar with, where Uber cabs are fairly common, and I found the results to be fairly accurate. Do test and leave feedback. See the Uber CLI GitHub page for more info.
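To make the syntax concrete, here's what an invocation could look like; the addresses below are just placeholders, so substitute your own:

uber time 'Times Square, New York'
uber price -s 'Times Square, New York' -e 'JFK Airport, New York'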

UBports Installer for Ubuntu Touch is just too good! [Tech Drive-in]

Even as someone who bought into the Ubuntu Touch hype very early, I was not expecting much from UBports, to be honest. But to my pleasant surprise, the UBports Installer turned my 4-year-old BQ Aquaris E4.5 Ubuntu Edition hardware into a slick, clean, and usable phone again.



ubuntu phone 16.04
UBports Installer and Ubuntu Touch
As many of you know already, Ubuntu Touch was Canonical's failed attempt to deliver a competent mobile operating system based on its desktop version. The first Ubuntu Touch installed smartphone was released in 2015 by BQ, a Spanish smartphone manufacturer. And in April 2016, the world's first Ubuntu Touch based tablet, the BQ Aquaris M10 Ubuntu Edition, was released.

Though the initial response was quite promising, Ubuntu Touch failed to make a significant enough splash in the smartphone space. In fact, Ubuntu Touch was not alone; many other mobile OS projects, like Firefox OS or even Samsung-owned Tizen OS for that matter, failed to capture a sizable market share from the Android/iOS duopoly.

To the disappointment of Ubuntu enthusiasts, Mark Shuttleworth announced the termination of Ubuntu Touch development in April, 2017.


Rise of UBports and revival of Ubuntu Touch Project
ubuntu touch 16.04
For all its inadequacies, Ubuntu Touch was one unique OS. It looked and felt different from most other mobile operating systems. And Ubuntu Touch enthusiasts were not ready to give up on it so easily. Enter UBports.

UBports turned Ubuntu Touch into a community-driven project. Passionate people from around the world now contribute to the development of Ubuntu Touch. In August 2018, UBports released OTA-4, upgrading Ubuntu Touch's base from Canonical's original Ubuntu 15.04 (Vivid Vervet) to the current long-term support version, Ubuntu 16.04 LTS.

They actively test the OS on a number of legacy smartphone hardware and help people install Ubuntu Touch on their smartphones using an incredibly capable, cross-platform, installer.

Ubuntu Touch Installer on Ubuntu 19.04
Though I knew about the UBports project before, I was never motivated enough to try the new OS on my Aquaris E4.5, until yesterday. By a sheer stroke of luck, I stumbled upon the UBports Installer in the Ubuntu Software Center. I was curious to find out if it really worked as claimed on the page.
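If you can't spot it in the Software Center, the installer is also distributed as a snap; a one-liner sketch, assuming it is still published under the name ubports-installer:

sudo snap install ubports-installer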

ubuntu touch installer on ubuntu 19.04

I fired up the app on my Ubuntu 19.04 and plugged in my Aquaris E4.5. Voila! the installer detected my phone in a jiffy. Since there wasn't much data on my BQ, I proceeded with Ubuntu Touch installation.

ubports ubuntu touch installer

The instructions were pretty straightforward, and it took probably 15 minutes to download, restart, and install the 16.04 LTS based Ubuntu Touch on my 4-year-old hardware.

ubuntu touch ubports

In my experience, even flashing an Android was never this easy! My Ubuntu phone is usable again without all the unnecessary bloat that made it clunky. This post is a tribute to the UBports community for the amazing work they've been doing with Ubuntu Touch. Here's also a list of smartphone hardware that can run Ubuntu Touch.

Retro Terminal that Emulates Old CRT Display (Ubuntu 18.10, 18.04 PPA) [Tech Drive-in]

We've featured cool-retro-term before. It is a wonderful little terminal emulator app for Ubuntu (and Linux) that sports the cool retro look of old CRT displays.

Let the pictures speak for themselves.

retro terminal ubuntu ppa

Pretty cool, right? Not only does it look cool, it functions just like a normal Terminal app. You don't lose out on any features normally associated with a regular Terminal emulator. cool-retro-term comes with a bunch of themes and customisations that take its retro-cool appeal a few notches higher.

cool-old-term retro terminal ubuntu linux

Enough now, let's find out how you install this retro looking Terminal emulator on Ubuntu 18.04 LTS, and Ubuntu 18.10. Fire up your Terminal app, and run these commands one after the other.

sudo add-apt-repository ppa:vantuz/cool-retro-term
sudo apt update
sudo apt install cool-retro-term

Done. The above PPA supports Ubuntu Artful, Bionic and Cosmic releases (Ubuntu 17.10, 18.04 LTS, 18.10). cool-retro-term is now installed and ready to go.


Since I don't have Artful or Bionic installations in any of my computers, I couldn't test the PPA on those releases. Do let me know if you faced any issues while installing the app.

And as some of you might have noticed, I'm running cool-retro-term from an AppImage. This is because I'm on Ubuntu 19.04 "disco dingo", and obviously the app doesn't support an unreleased OS (well, duh!).

retro terminal ubuntu ppa

This is how it looks in fullscreen mode. If you are a non-Ubuntu user, you can find various download options here. If you are on Fedora or distros based on it, cool-retro-term is available in the official repositories.
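On Fedora, for instance, that should be a single command, assuming the package keeps its upstream name in the official repos:

sudo dnf install cool-retro-term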

Google's Stadia Cloud Gaming Service, Powered by Linux [Tech Drive-in]

Unless you live under a rock, you must've been inundated with nonstop news about Google's high-octane launch ceremony yesterday where they unveiled the much hyped game streaming platform called Stadia.

Stadia, or Project Stream as it was earlier called, is a cloud gaming service where the games themselves are hosted on Google's servers, while the visual feedback from the game is streamed to the player's device through Google Chrome. If this technology catches on, and if it works just as well as shown in the demos, Stadia could be what the future of gaming looks like.

Stadia, Powered by Linux

It is fairly common knowledge that Google data centers use Linux rather extensively. So it is not really surprising that Google would use Linux to power its cloud-based Stadia gaming service. 

google stadia runs on linux

Stadia's architecture is built on Google's data center network, which has an extensive presence across the planet. With Stadia, Google is offering a virtual platform where processing resources can be scaled up to match your gaming needs without the end user ever spending a dime more on hardware.


And since Google data centers mostly run on Linux, the games on Stadia will run on Linux too, through the Vulkan API. This is great news for gaming on Linux. Even if Stadia doesn't directly result in more games on Linux, it could potentially make gaming a platform-agnostic, cloud-based service, like Netflix.

With Stadia, "the data center is your platform," claims Majd Bakar, head of engineering at Stadia. Stadia is not constrained by limitations of traditional console systems, he adds. Stadia is a "truly flexible, scalable, and modern platform" that takes into account the future requirements of the gaming ecosystem. When launched later this year, Stadia will be able to stream at 4K HDR and 60fps with surround sound.


Watch the full presentation here. Tell us what you think about Stadia in the comments.

Ubuntu 19.04 Updates - 7 Things to Know [Tech Drive-in]

Ubuntu 19.04 has been released. I've been using it for the past week or so, and even as a pre-beta, the OS was pretty stable and not buggy at all. Here are a bunch of things you should know about the newly released Ubuntu 19.04.

what's new in ubuntu 19.04

1. Codename: "Disco Dingo"

How about that! As most of you know already, Canonical names its semiannual Ubuntu releases using an adjective and an animal with the same first letter (Intrepid Ibex, Feisty Fawn, or Maverick Meerkat, for example, were some of my favourites). And Ubuntu 19.04 is codenamed "Disco Dingo", which has to be one of the coolest codenames ever for an OS.


2. Ubuntu 19.04 Theme Updates

A new cleaner, crisper looking Ubuntu is coming your way. Can you notice the subtle changes to the default Ubuntu theme in screenshot below? Like the new deep-black top panel and launcher? Very tastefully done.

what's new in ubuntu 19.04

To be sure, this is now looking more and more like vanilla GNOME and less like Unity, which is not a bad thing.

ubuntu 19.04 updates

There are changes to the icons too. That hideous blue Trash icon is gone. Others include a new Update Manager icon, Ubuntu Software Center icon and Settings Icon.

3. Ubuntu 19.04 Official Mascot

GIFs speak louder than words. Meet the official "Disco Dingo" mascot.



Pretty awesome, right? "Disco Dingo" mascot calls for infinite wallpaper variations.

4. The New Default Wallpaper

The new "Disco Dingo" themed wallpaper is so sweet: very Ubuntu-ish yet unique. A gray scale version of the same wallpaper is a system default too.

ubuntu 19.04 disco dingo features

UPDATE: There's an entire suite of newer and better wallpapers on Ubuntu 19.04!

5. Linux Kernel 5.0 Support

Ubuntu 19.04 "Disco Dingo" will officially support the recently released Linux Kernel version 5.0. Among other things, Linux Kernel 5.0 comes with AMD FreeSync display support which is awesome news to users of high-end AMD Radeon graphics cards.

ubuntu 19.04 features

Also important to note is the added support for Adiantum Data Encryption and Raspberry Pi touchscreens. Apart from that, Kernel 5.0 has regular CPU performance improvements and improved hardware support.

6. Livepatch is ON

Ubuntu 19.04's 'Software and Updates' app has a new default tab called Livepatch. This new feature should ideally help you to apply critical kernel patches without rebooting.

Livepatch may not mean much to a normal user who regularly powers down his or her computer, but it can be very useful for enterprise users, where any downtime is simply not acceptable.

ubuntu 19.04 updates

Canonical introduced this feature in Ubuntu 18.04 LTS, but it was later removed when Ubuntu 18.10 was released. The Livepatch feature is disabled on my Ubuntu 19.04 installation though, with a "Livepatch is not available for this system" warning. Not exactly sure what that means. Will update.

7. Ubuntu 19.04 Release Schedule

The beta freeze is scheduled to happen on March 28th and final release on April 18th.

ubuntu 19.04 what's new

Normally, post the beta release, it is safe to install Ubuntu 19.04 for normal everyday use in my opinion, but ONLY if you are inclined to give it a spin before everyone else, of course. I'd never recommend a pre-release OS on production machines. Ubuntu 19.04 Daily Build Download.


My biggest disappointment though is the supposed Ubuntu Software Center revamp which is now confirmed to not make it to this release. Subscribe us on Twitter and Facebook for more Ubuntu 19.04 release updates.

ubuntu 19.04 disco dingo

Recommended read: Top things to do after installing Ubuntu 19.04

Purism: A Linux OS is talking Convergence again [Tech Drive-in]

The hype around "convergence" just won't die, it seems. We have heard it a lot from Ubuntu, from KDE, and even from Google and Apple in fact. But the dream of true convergence, a uniform OS experience across platforms, never really materialised. Even behemoths like Apple and Google failed to pull it off with their Android/iOS duopoly. Purism's Debian-based PureOS wants to change all that for good.

pure os linux

Purism, PureOS, and the future of Convergence

Purism, a computer technology company based out of California, shot to fame for its Librem series of privacy and security focused laptops and smartphones. Purism raised over half a million dollars through a Crowd Supply crowdfunding campaign for its laptop hardware back in 2015. And unlike many crowdfunding megahits which later turned out to be duds, Purism delivered on its promises big time.


Later in 2017, Purism surprised everyone again with a successful crowdfunding campaign for its Linux-based opensource smartphone, dubbed Librem 5. The campaign raised over $2.6 million, surpassing its $1.5 million crowdfunding goal in just two weeks. Purism's Librem 5 smartphones will start shipping late 2019.

Librem, which loosely refers to free and opensource software, was the brand name chosen by Purism for its laptops/smartphones. One of the biggest USPs of Purism devices is the hardware kill switches they come loaded with, which physically disconnect the phone's camera, WiFi, Bluetooth, and mobile broadband modem.

Meet PureOS, Purism's Debian Based Linux OS

PureOS is a free and opensource, Debian based Linux distribution which runs on all Librem hardware including its smartphones. PureOS is endorsed by Free Software Foundation. 

purism os linux

The term convergence, in computer speak, refers to applications that can work seamlessly across platforms, bringing a consistent look and feel and similar functionality to your smartphone and your computer. 
"Purism is beating the duopoly to that dream, with PureOS: we are now announcing that Purism’s PureOS is convergent, and has laid the foundation for all future applications to run on both the Librem 5 phone and Librem laptops, from the same PureOS release", announced Jeremiah Foster, the PureOS director at Purism (by duopoly, he was referring to Android/iOS platforms that dominate smartphone OS ecosystem).
Ideally, convergence should be able to help app developers and users all at the same time. App developers should be able to write their app once, test it once, and run it everywhere. And users should be able to seamlessly use, connect and sync apps across devices and platforms.

Easier said than done though. As Jeremiah Foster himself explains:
"it turns out that this is really hard to do unless you have complete control of software source code and access to hardware itself. Even then, there is a catch; you need to compile software for both the phone’s CPU and the laptop CPU which are usually different architectures. This is a complex process that often reveals assumptions made in software development but it shows that to build a truly convergent device you need to design for convergence from the beginning."

How is PureOS achieving convergence?

PureOS has had a distinct advantage when it comes to convergence. Purism is a hardware maker that also designs its platforms and software. From its inception, Purism has been working on a "universal operating system" that can run on different CPU architectures.

librem opensource phone

"By basing PureOS on a solid, foundational operating system – one that has been solving this performance and run-everywhere problem for years – means there is a large set of packaged software that 'just works' on many different types of CPUs."

The second big factor is "adaptive design": software apps that can adapt to desktop or mobile easily, just like a modern website with responsive design.


"Purism is hard at work on creating adaptive GNOME apps – and the community is joining this effort as well – apps that look great, and work great, both on a phone and on a laptop".

Purism has also developed an adaptive presentation library for GTK+ and GNOME, called libhandy, which the third party app developers can use to contribute to Purism's convergence ecosystem. Still under active development, libhandy is already packaged into PureOS and Debian.

Komorebi Wallpapers display Live Time & Date, Stunning Parallax Effect on Ubuntu [Tech Drive-in]

Live wallpapers are not a new thing. In fact, we had a lot of live wallpapers to choose from on Linux 10 years ago. Today? Not so much. In fact, be it GNOME or KDE, most desktops today are far less customizable than they used to be. The Komorebi wallpaper manager for Ubuntu is kind of a wayback machine in that sense.

ubuntu live wallpaper

Install Gorgeous Live Wallpapers in Ubuntu 18.10/18.04 using Komorebi

Komorebi Wallpaper Manager comes with a pretty neat collection of live wallpapers and even video wallpapers. The package also contains a simple tool to create your own live wallpapers.


Komorebi comes packaged in a convenient 64-bit DEB package, making it super easy to install in Ubuntu and most Debian based distros (latest version dropped 32-bit support though).  
ubuntu 18.10 live wallpaper
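The install itself is the usual DEB routine; a minimal sketch, assuming you've downloaded the latest 64-bit package from Komorebi's GitHub releases page (the filename pattern below is just an example):

cd ~/Downloads
sudo apt install ./komorebi*.deb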

That's it! Komorebi is installed and ready to go! Now launch Komorebi from app launcher.

ubuntu komorebi live wallpaper

And finally, to uninstall Komorebi and revert all the changes you made, do this in Terminal (CTRL+ALT+T).

sudo apt remove komorebi

Komorebi works great on Ubuntu 18.10, and 18.04 LTS. A few more screenshots.

komorebi live wallpaper ubuntu

As you can see, live wallpapers obviously consume more resources than a regular wallpaper, especially when you switch on Komorebi's fancy video wallpapers. But it is definitely not the resource hog I feared it would be.

ubuntu wallpaper live time and date

Like what you see here? Go ahead and give Komorebi Wallpaper Manager a spin. Does it turn out to be not as resource-friendly in your PC? Let us know your opinion in the comments. 

ubuntu live wallpapers

A video wallpaper example. To see them in action, watch this demo.

Snap Install Mario Platformer on Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]

Nintendo's Mario needs no introduction. This game defined our childhoods. Now you can install and have fun with an unofficial version of the famed Mario platformer in Ubuntu 18.10 via this Snap package.

install Mario on Ubuntu

Play Nintendo's Mario Unofficially on Ubuntu 18.10

"Mari0 is a Mario + Portal platformer game." It is not an official release and hence the slight name change (Mari0 instead of Mario). Mari0 is still in testing, and might not work as intended. It doesn't work fullscreen for example, but everything else seems to be working great in my PC.

But please be aware that this app is still in testing, and a lot of things can go wrong. Mari0 also comes with joystick support. Here's how you install unofficial Mari0 snap package. Do this in Terminal (CTRL+ALT+T)

sudo snap install mari0

To enable joystick support:

sudo snap connect mari0:joystick

nintendo mario ubuntu

Please find time to provide valuable feedback to the developer post testing, especially if something went wrong. You can also leave your feedback in the comments below.

Florida based Startup Builds Ubuntu Powered Aerial Robotics [Tech Drive-in]

Apellix is a Florida-based startup that specialises in aerial robotics. They intend to create safer work environments by replacing workers with task-specific drones that complete high-risk jobs at dangerous or elevated work sites.

ubuntu robotics

Robotics with an Ubuntu Twist

Ubuntu is expanding its reach into robotics and IoT in a big way. A few years ago at the TechCrunch Disrupt event, UAVIA unveiled a new generation of its one hundred percent remotely operable drones (an industry first, they claimed), which were built with Ubuntu under the hood. Then there were others like Erle Robotics (recently renamed Acutronic Robotics) which made big strides in drone technology using Ubuntu at its core.


Apellix is the only aerial robotics company with drones "capable of making contact with structures through fully computer-controlled flight", claims Robert Dahlstrom, Founder and CEO of Apellix.

"At height, a human pilot cannot accurately gauge distance. At 45m off the ground, they can’t tell if they are 8cm or 80cm away from the structure. With our solutions, an engineer simply positions the drone near the inspection site, then the on-board computer takes over and automates the delicate docking process." He adds.


Apellix considered many popular Linux distributions before zeroing in on Ubuntu for its stability, reliability, and large developer ecosystem. Ubuntu's versatility also enabled Apellix to use the same underlying OS platform and software packages across development and production.

The team is currently developing on Ubuntu Server with the intent to migrate to Ubuntu Core. The company is also making extensive use of Ubuntu Server, both on board its robotic systems and in its cloud operations, according to a case study by Canonical, the company behind Ubuntu. 

apellix ubuntu drones

"With our aircraft, an error of 2.5 cm could be the difference between a successful flight and a crash," comments Dahlstrom. "Software is core to avoiding those errors and allowing us to do what we do - so we knew that placing the right OS at the heart of our solutions was essential." 

Openpilot: An Opensource Alternative to Tesla Autopilot, GM Super Cruise [Tech Drive-in]

Openpilot is an opensource driving agent which at the moment can perform industry-standard functions such as Adaptive Cruise Control and Lane Keeping Assist System for a select few auto manufacturers.


opensource autopilot system

Meet Project Openpilot

Open source is no stranger to the world of autonomous cars. Even as far back as 2013, Ubuntu was spotted in Mercedes-Benz driverless cars, and it is also a well-known fact that Google is using a 'lightly customized Ubuntu' at the core of its push towards building fully autonomous cars. 

Openpilot though is unique in its own way. It's an opensource driving agent that already works (as is claimed) in a number of models from manufacturers such as Toyota, Kia, Honda, Chevrolet, Hyundai, Jeep, etc.


Above image: An Openpilot user getting a distracted-driver alert. Apart from Adaptive Cruise Control (ACC) and Lane Keeping Assist System functions, Openpilot's developers claim that their technology currently is "about on par with Tesla Autopilot and GM Super Cruise, and better than all other manufacturers."

If Tesla's Autopilot were iOS, Openpilot's developers would like their product to become the "Android for cars": the ubiquitous software of choice when autonomous systems in cars go universal.



The Openpilot-endorsed, officially supported list of cars keeps growing. It now includes some 40 odd models from manufacturers ranging from Toyota to Hyundai. And they are actively testing Openpilot on newer cars from VW, Subaru etc. according to their Twitter feed.

Even a lower variant of the Tesla Model S, which came without the Tesla Autopilot system, was upgraded with comma.ai's Openpilot solution, which then mimicked a number of Tesla Autopilot features, including automatic steering on highways, according to this article. (comma.ai is the startup behind Openpilot.)

Related read: Udacity's attempts to build a fully opensource self-driving car, and Linux Foundation's Automotive Grade Linux (AGL) infotainment system project which Toyota intends to use in its future cars.

Oranchelo - The icon theme to beat on Ubuntu 18.10 [Tech Drive-in]

OK, that might be an overstatement. But Oranchelo is good, really good.


Oranchelo Icons Theme for Ubuntu 18.10

Oranchelo is a flat-design icon theme originally designed for the XFCE4 desktop, though it works great on GNOME as well. I especially like its distinct take on the Firefox and Chromium icons, as you can see in the screenshot.



Here's how you install Oranchelo icons theme on Ubuntu 18.10 using Oranchelo PPA. Just copy-paste the following three commands to Terminal (CTRL+ALT+T).

sudo add-apt-repository ppa:oranchelo/oranchelo-icon-theme
sudo apt update
sudo apt install oranchelo-icon-theme

Now run GNOME Tweaks, Appearance > Icons > Oranchelo.


Meet the artist behind Oranchelo icons theme at his deviantart page. So, how do you like the new icons? Let us know your opinion in the comments below.


11 Things I did After Installing Ubuntu 18.10 Cosmic Cuttlefish [Tech Drive-in]

I have been using "Cosmic Cuttlefish" since its first beta. It is perhaps one of the most visually pleasing Ubuntu releases ever. But more on that later. Now let's discuss what can be done to improve the overall user experience by diving deep into the nitty-gritty of Canonical's brand new flagship OS.

1. Enable MP3/MP4/AVI Playback, Adobe Flash etc.

This has been perhaps the standard 'first-thing-to-do' ever since the Ubuntu age dawned on us. You do have an option to install most of the 'restricted-extras' while installing the OS itself now, but if you are not sure you've ticked all the right boxes, just run the following command in Terminal.

sudo apt install ubuntu-restricted-extras

OR

You can install it straight from the Ubuntu Software Center by CLICKING HERE.

2. Get GNOME Tweaks

GNOME Tweaks is non-negotiable.

things to do after installing ubuntu 18.10

GNOME Tweaks is an app that lets you tweak little things in GNOME-based OSes that are otherwise hidden behind menus. If you are on Ubuntu 18.10, Tweaks is a must. Honestly, I don't remember if it was installed by default. But here's how you install it anyway; Apt-URL will prompt you if the app already exists.


Search for Gnome Tweaks in Ubuntu Software Center. OR simply CLICK HERE to go straight to the app in Software Center. OR even better, copy-paste this command in Terminal (keyboard shortcut: CTRL+ALT+T).

sudo apt install gnome-tweaks

3. Displaying Date/Battery Percentage on Top Panel  

The screenshot, I hope, is self-explanatory.

things to do after installing ubuntu 18.10

If you have GNOME Tweaks installed, this is easily done. Open GNOME Tweaks, go to the 'Top Bar' side menu and enable/disable what you need.

4. Enable 'Click to Minimize' on Ubuntu Dock

Honestly, I don't have a clue why this is disabled by default. You intuitively expect the app shortcuts on the Ubuntu dock to 'minimize' when you click on them (at least I do).

In fact, the feature is already there, all you need to do is to switch it ON. Do this in Terminal.

gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'

That's it. Now if you didn't find the 'click to minimize' feature useful, you can always revert Dock settings back to its original state, by copy-pasting the following command in Terminal app.

gsettings reset org.gnome.shell.extensions.dash-to-dock click-action

5. Pin/Unpin Useful Stuff from Launcher

There are a bunch of apps that are pinned to your Ubuntu launcher by default.

things to do after ubuntu 18.10
 
For example, I almost never use the 'Help' app or the 'Amazon' shortcut preloaded on launcher. But I would prefer a shortcut to Terminal app instead. Right-click on your preferred app on the launcher, and add-to/remove-from favorites as you please.

6. Enable/Disable Two Finger Scrolling

As you must've noticed, two-finger scrolling is a system default now. 

things to do after installing ubuntu cosmic
 
One of my laptops acts strangely when two-finger scrolling is on. You can easily disable two-finger scrolling and enable old-school edge scrolling in 'Settings'.  Settings > Mouse and Touchpad

Quicktip: You can go straight to submenus by simply searching for it in GNOME's universal search bar.

ubuntu 18.10 cosmic

Take for example the screenshot above, where I triggered the GNOME menu by hitting Super(Windows) key, and simply searched for 'mouse' settings. The first result will take me directly to the 'Settings' submenu for 'Mouse and Touchpad' that we saw earlier. Easy right? More examples will follow.

7. Nightlight Mode ON

When you're glued to your laptop/PC screen for a large amount of time every day, it is advisable that you enable the automatic nightlight mode for the sake of your eyes. Be it the laptop or my phone, this has become an essential feature. The sight of an LED display without nightlight ON during lowlight conditions immediately gives me a headache these days. Easily one of my favourite in-built features on GNOME.


Settings > Devices > Display > Night Light ON/OFF

things to do after installing ubuntu 18.10

OR, as before, hit the Super key and search for 'night light'. It will take you straight to the submenu under Devices > Display. Guess you won't need any more examples on that.

things to do after installing ubuntu 18.10

8. Safe Eyes App for Ubuntu

A popup that fills the entire screen and forces you to take your eyes off it.

apps for ubuntu 18.10

Apart from enabling the nightlight mode, Safe Eyes is another app I strongly recommend to those who stare at their laptops for long periods of time. This nifty little app forces you to take your eyes off the computer screen and do some standard eye exercises at regular intervals (which you can change).

things to do after installing ubuntu 18.10

Installation is pretty straightforward. Just these three commands in your Terminal.

sudo add-apt-repository ppa:slgobinath/safeeyes
sudo apt update 
sudo apt install safeeyes 

9. Privacy on Ubuntu 18.10

Guess I don't need to lecture you on the importance of privacy in the post-PRISM era.

ubuntu 18.10 privacy

Ubuntu remembers your usage & history to recommend you frequently used apps and such. And this is never shared over the network. But if you're not comfortable with this, you can always disable and delete your usage history on Ubuntu. Settings > Privacy > Usage & History 

10. Perhaps a New Look & Feel?

As you might have noticed, I'm not using the default Ubuntu theme here.

themes ubuntu 18.10

Right now I'm using System76's Pop OS GTK theme and icon sets. They look pretty neat, I think. Just three commands to install them on your Ubuntu 18.10 (a fourth if you want the wallpapers too).

sudo add-apt-repository ppa:system76/pop
sudo apt-get update 
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme 
sudo apt install pop-wallpapers 

Execute the last command only if you want the Pop OS wallpapers as well. To enable the newly installed theme and icon sets, launch GNOME Tweaks > Appearance (see screenshot). I will be making separate posts on themes, icon sets and GNOME shell extensions. So stay subscribed. 

11. Disable Error Reporting

If you find the "application closed unexpectedly" popups annoying, and would like to disable error reporting altogether, this is what you need to do.

sudo gedit /etc/default/apport

This will open up a text editor window which has only one entry: "enabled=1". Change the value to '0' (zero) and you have Apport error reporting completely disabled.


Follow us on Facebook, and Twitter

RIOT OS: A tiny Opensource OS for the 'Internet of Things' (IoT) [Tech Drive-in]

"RIOT powers the Internet of Things like Linux powers the Internet." RIOT is a small, free and opensource operating system for the memory constrained, low power wireless IoT devices.


RIOT OS: A tiny OS for embedded systems

Initially developed by Freie Universität Berlin (FU Berlin), the INRIA institute and HAW Hamburg, RIOT OS has evolved over the years into a very competent alternative to TinyOS, Contiki etc. It now supports application programming in languages such as C and C++, and provides full multithreading and real-time capabilities. RIOT can run on 8-bit, 16-bit and 32-bit processors, including ARM Cortex-M.


RIOT is opensource, has its source code published on GitHub, and is based on a microkernel architecture (the bare minimum software required to implement an operating system). RIOT OS vs competition:

riot os for IoT

More information on RIOT OS can be found here. RIOT summits are held annually in major cities of Europe, if you are interested pin this up. Thank you for reading.

IBM, the 6th biggest contributor to Linux Kernel, acquires RedHat for $34 Billion [Tech Drive-in]

The $34 billion all-cash deal to purchase opensource pioneer Red Hat is IBM's biggest-ever acquisition by far. The deal will give IBM a major foothold in the fast-growing cloud computing market, and the combined entity could give stiff competition to Amazon's cloud computing platform, AWS. But what about Red Hat and its future?

ibm-redhat

Another Oracle - Sun Microsystems deal in the making? 
The alarmists among us might be quick to compare the IBM - Red Hat deal with the decade old deal between Oracle Corporation and Sun Microsystems, which was then a major player in opensource software scene.

But fear not. Unlike Oracle (which killed off Sun's OpenSolaris OS almost immediately after acquisition and even started a patent war against Android using Sun's Java patents), IBM is already a major contributor to opensource software including the mighty Linux Kernel. In fact, IBM was the 6th biggest contributor to Linux kernel in 2017.

What's in it for IBM?
With the acquisition of Red Hat, IBM becomes the world's #1 hybrid cloud provider, "offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses", according to Ginni Rometty, IBM Chairman, President and CEO. She adds:

“Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next 80 percent is about unlocking real business value and driving growth. This is the next chapter of the cloud. It requires shifting business applications to hybrid cloud, extracting more data and optimizing every part of the business, from supply chains to sales.”

The Future of Red Hat
The Red Hat story is almost as old as Linux itself. Founded in 1993, RedHat's growth was phenomenal. Over the next two decades Red Hat went on to establish itself as the premier Linux company, and Red Hat OS was the enterprise Linux operating system of choice. It set the benchmark for others like Ubuntu, openSUSE and CentOS to follow. Red Hat is currently the second largest corporate contributor to the Linux kernel after Intel (Intel really stepped-up its Linux Kernel contributions post-2013).

Regular users might be more familiar with Fedora Project, a more user-friendly operating system maintained by Red Hat that competes with mainstream, non-enterprise operating systems like Ubuntu, elementary OS, Linux Mint or even Windows 10 for that matter. Will Red Hat be able to stay independent post acquisition?

According to the official press release, "IBM will remain committed to Red Hat’s open governance, open source contributions, participation in the open source community and development model, and fostering its widespread developer ecosystem. In addition, IBM and Red Hat will remain committed to the continued freedom of open source, via such efforts as Patent Promise, GPL Cooperation Commitment, the Open Invention Network and the LOT Network." Well, that's a huge relief.

In fact, IBM and Red Hat have been partnering with each other for over 20 years, with IBM serving as an early supporter of Linux, collaborating with Red Hat to help develop and grow enterprise-grade Linux. And as IBM's CEO mentioned, the acquisition is more of an evolution of the long-standing partnership between the two companies.
"Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise,” said Jim Whitehurst, President and CEO, Red Hat. “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation."
Predicting the future can be tricky. A lot of things can go wrong. But one thing is sure, the acquisition of Red Hat by IBM is nothing like the Oracle - Sun deal. Between them, IBM and Red Hat must have contributed more to the open source community than any other organization.

How to Upgrade from Ubuntu 18.04 LTS to 18.10 'Cosmic Cuttlefish' [Tech Drive-in]

One day left before the final release of Ubuntu 18.10 codenamed "Cosmic Cuttlefish". This is how you make the upgrade from Ubuntu 18.04 to 18.10.

Upgrade to Ubuntu 18.10 from 18.04

Ubuntu 18.10 has a brand new look!
As you can see from the screenshot, a lot has changed. Ubuntu 18.10 arrives with a major theme overhaul. After almost a decade, the default Ubuntu GTK theme ("Ambiance") is being replaced with a brand new one called "Yaru". The new theme is based heavily on GNOME's default "Adwaita" GTK theme. More on that later.

Upgrade from Ubuntu 18.04 LTS to 18.10
If you're on Ubuntu 18.04 LTS, upgrading to 18.10 "cosmic" is a pretty straightforward affair. Since 18.04 is a long-term support (LTS) release (meaning the OS will get official updates for about 5 years), it may not prompt you with an upgrade option when 18.10 finally arrives. 

So here's how it's done. Disclaimer: back up your critical data before going forward. And it's better not to try this on mission-critical machines. You're on LTS anyway.
  • An up-to-date Ubuntu 18.04 LTS is the first step. Do the following in Terminal.
$ sudo apt update && sudo apt dist-upgrade
$ sudo apt autoremove
  • The first command will check for updates and then proceed with upgrading your Ubuntu 18.04 LTS with the latest updates. The "autoremove" command will clean up any and all dependencies that were installed with applications, and are no longer required.
  • Now the slightly tricky part. You need to edit the /etc/update-manager/release-upgrades file and change the Prompt entry (Prompt=lts on a stock LTS install, or Prompt=never) to Prompt=normal, or else it will give a "no release found" error message. 
  • I used Vim to make the edit. But for the sake of simplicity, let's use gedit. 
$ sudo gedit /etc/update-manager/release-upgrades
  • Make the edit and save the changes. Now you are ready to go ahead with the upgrade. Make sure your laptop is plugged-in, this will take time. 
  • To be on the safer side, please make sure that there's at least 5GB of disk space left in your home partition (it will prompt you and exit if you don't have enough space required for the upgrade). 
$ sudo do-release-upgrade -d
  • That's it. Wait for a few hours and let it do its magic. 
My upgrade to Ubuntu 18.10 was uneventful. Nothing broke and it all worked like a charm. After the upgrade is done, you're probably still stuck with your old theme. Fire up the "GNOME Tweaks" app (get it from the Software Center if you haven't already), and change the theme and the icons to "Yaru". 

Meet 'Project Fusion': An Attempt to Integrate Tor into Firefox [Tech Drive-in]

A real private mode in Firefox? A Tor-integrated Firefox could be just that. The Tor Project is currently working with Mozilla to integrate Tor into Firefox.


Over the years, and more so since the Cambridge Analytica scandal, Mozilla has taken an increasingly strong stance on user privacy. Firefox's Facebook Container extension, for example, makes it much harder for Facebook to collect data from your browsing activities (yep, that's a thing. Facebook is tracking your every move on the web). The extension now includes Facebook Messenger and Instagram as well.

Firefox with Tor Integration

For starters, Tor is a free software and an open network for anonymous communication over the web. "Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location."

And don't confuse this project with the Tor Browser, which is a web browser with Tor's elements built on top of Firefox. The Tor Browser in its current form has many limitations. Since it is based on Firefox ESR, it takes a lot of time and effort to rebase the browser with new features from Firefox's stable builds every year or so.

Enter 'Project Fusion'

Now that Mozilla has officially taken over the work of integrating Tor into Firefox through Project Fusion, things could change for the better. With the intention of creating a 'super-private' mode in Firefox that supports First Party Isolation (which prevents cookies from tracking you across domains), Fingerprinting Resistance (which blocks user tracking through canvas elements), and a Tor proxy, 'Project Fusion' is aiming big. To put it together, the goals of 'Project Fusion' can be condensed into four points.
  • Implement fingerprinting resistance, make it more user-friendly, and reduce web breakage.
  • Implement proxy bypass framework.
  • Figure out the best way to integrate Tor proxy into Firefox.
  • Real private browsing mode in Firefox, with First Party Isolation, Fingerprinting Resistance, and Tor proxy.
As good as it sounds, Project Fusion could still be years away or may not happen at all given the complexity of the work. According to a Tor Project Developer at Mozilla:
"Our ultimate goal is a long way away because of the amount of work to do and the necessity to match the safety of Tor Browser in Firefox when providing a Tor mode. There's no guarantee this will happen, but I hope it will and we will keep working towards it."
If you want to help, Firefox bugs tagged 'fingerprinting' in the whiteboard are a good place to start. Further reading at the Tor 'Project Fusion' page.

City of Bern Awards Switzerland's Largest Open Source Contract for its Schools [Tech Drive-in]

In another major win in a span of weeks for the proponents of open source solutions in Europe, Bern, the capital of Switzerland, is pushing ahead with its plans to adopt open source tools as its software of choice for all its public schools. If all goes well, some 10,000 students in Swiss schools could soon start getting their training on an IT infrastructure that is largely open source.

Switzerland's Largest Open Source deal

Over 10,000 Students to Benefit

Switzerland's largest open-source deal introduces a brand new IT infrastructure for the public schools of its capital city. The package includes Collabora Cloud Office, an online version of LibreOffice which is to be hosted in the City of Bern's data center, as its core component. Nextcloud, Kolab, Moodle, and Mahara are the other prominent open source tools included in the package. The contract is worth CHF 13.7 million over 6 years.

In an interview given to 'Der Bund', one of Switzerland's oldest news publications, open-source advocate Matthias Stürmer, an EPP city councillor and IT expert, said that this is probably the largest-ever open-source deal in Switzerland.

Many European countries are clamoring to adopt open source solutions for their cities and schools. From the German Federal Information Technology Centre's (ITZBund) recent selection of Nextcloud as its cloud solutions partner, to the city of Turin's adoption of Ubuntu, to the Italian military's LibreOffice migration, Europe's recognition of open source solutions as a legitimate alternative is gaining ground.

Ironically enough, most of this software will run on Apple's proprietary iOS platform, as the clients given to students will all be Apple iPads. But hey, it had to start somewhere. When Europe's richest countries adopt open source, others will surely take notice. Stay tuned for updates. [via inside-channels.ch]

Germany says No to Public Cloud, Chooses Nextcloud's Open Source Solution [Tech Drive-in]

Germany's Federal Information Technology Centre (ITZBund) opts for an on-premise cloud solution which, unlike those fancy public cloud offerings, is completely private and under its direct control.

Germany's Open Source Migration

Given the recent privacy mishaps at some of the biggest public cloud providers on the planet, it is only natural that government agencies across the world are opting for solutions that can provide users with more privacy and security. If the recent Facebook - Cambridge Analytica debacle is any indication, data vulnerability has become a serious national security concern for all countries.

In light of these developments, the German government's IT service provider, ITZBund, has chosen Nextcloud as its cloud solutions partner. Nextcloud is a free and open source cloud software company based in Europe that lets you install and run its software on your own private server. ITZBund has been running a pilot with some 5,000 users on Nextcloud's platform since 2016.
"Nextcloud is pleased to announce that the German Federal Information Technology Center (ITZBund) has chosen Nextcloud as their solution for efficient and secure file sharing and collaboration in a public tender. Nextcloud is operated by the ITZBund, the central IT service provider of the federal government, and made available to around 300,000 users. ITZBund uses a Nextcloud Enterprise Subscription to gain access to operational, scaling and security expertise of Nextcloud GmbH as well as long-term support of the software."
ITZBund employs about 2,700 people, including IT specialists, engineers, and network and security professionals. After the successful completion of the pilot, ITZBund floated a public tender, which eventually selected Nextcloud as its preferred partner. Nextcloud scored high on security requirements and scalability, which it addresses through its unique apps concept.

LG Makes its webOS Operating System Open Source, Again! [Tech Drive-in]

Not many might remember HP's capable webOS. The open source webOS operating system was HP's answer to the Android and iOS platforms. It was slick and very user-friendly from the start; some even considered it a better alternative to Android for tablets at the time. But like many other smaller players, HP's webOS just couldn't find enough takers, and the project was abruptly ended and sold off to LG.


The Open Source LG webOS

Under the 2013 agreement with HP Inc., LG Electronics had unlimited access to all webOS related documentation and source code. When LG took the project underground, webOS was still an open-source project.

After many years of development, webOS is now LG's platform of choice for its Smart TV division. It is generally considered as one of the better sorted Smart TV user interfaces. LG is now ready to take the platform beyond Smart TVs. LG has developed an open source version of its platform, called webOS Open Source Edition, now available to the public at webosose.org.

Dr. I.P. Park, CTO at LG Electronics, had this to say: "webOS has come a long way since then and is now a mature and stable platform ready to move beyond TVs to join the very exclusive group of operating systems that have been successfully commercialized at such a mass level. As we move from an app-based environment to a web-based one, we believe the true potential of webOS has yet to be seen."

By open sourcing webOS, it looks like LG is gunning for Samsung's Tizen OS, which is also open source and built on top of Linux. In our opinion, device manufacturers preferring open platforms (like Automotive Grade Linux) over Android or iOS is a welcome development for the long-term health of the industry in general.

08-02-2024

13-12-2023

10:29

New York State Does Its Christmas Shopping at ASML [Computable]

The US state of New York is going to purchase a billion dollars' worth of chip-making machines from ASML. The investment is part of a ten-billion-dollar plan to build a nanotech complex near the University at Albany.

Sogeti to Keep Working on the KB's Data Warehouse [Computable]

Sogeti will once again be the data partner of the Koninklijke Bibliotheek (KB) for the next three years, with an option to extend up to a maximum of six years. The IT company has managed the data warehouse since 2016 and now gets...

HPE Strengthens Gen-AI Ties with Nvidia [Computable]

Infrastructure specialist Hewlett Packard Enterprise (HPE) is going to work more closely with AI hardware and software supplier Nvidia. Starting January 2024, they will jointly offer a powerful enterprise computing solution for generative artificial intelligence (gen-AI).

Econocom Announces International Branch: Gather [Computable]

The French-Belgian IT service provider Econocom has set up a separate, internationally operating business unit under the name Gather. This branch bundles expertise in audio-visual solutions, unified communications, and IT products and services, aimed at larger organizations...

Coalition: Improve Cycling Safety with Sensors [Computable]

The newly founded Coalition for Cyclist Safety, with bicycle manufacturer Koninklijke Gazelle on board, is working to improve cycling safety with the help of sensor technology, also known as vehicle-to-everything (v2x) technology. The automotive industry serves as a shining example;...

12-12-2023

13:39

Civil Servants May Experiment with Gen-AI Under Conditions [Computable]

The cabinet will not manage to present a comprehensive vision on generative AI (gen-AI) this year after all. The House of Representatives can expect such an integral picture of the impact this technology has on our society...

Software Vendor Topdesk Receives Growth Capital [Computable]

Topdesk, based in Delft, is receiving a capital injection of two hundred million euros for growth and further development. CVC Capital Partners, which is taking a minority stake, will give the service-management software vendor more clout.

Four Million to Boost EU Datacenter Education [Computable]

The European Commission (EC) has awarded a grant of four million euros to the Colleges for European Datacenter Education (Cedce) project. Its goal is to offer high-quality education focused on data centers. The project starts...

11-12-2023

22:26

Startup Nedscaper Brings Fox-IT Founder on Board [Computable]

Menno van der Marel, co-founder of IT security firm Fox-IT, is becoming strategic director at Nedscaper. The Dutch/South African startup provides security services for Microsoft environments. Van der Marel is also investing 2.2 million euros in the company.

PQR CEO Marijke Kasius Moves On to Bechtle [Computable]

As of January 1, Bechtle is appointing Marijke Kasius as country director for the group's companies in the Netherlands. The 39-year-old Kasius currently leads IT service provider PQR together with Marco Lesmeister. That position will be taken over by Marc...

Former IBM and Ajax Director Frank Kales Has Died [Computable]

Frank Kales passed away on December 8 at the age of 81. Football fans knew him as general director of football club Ajax during the turbulent 1999-2000 period. Before that, he worked for decades at IBM, where he eventually...

09-12-2023

23:56

EU AI Act Drives Up Costs for Software Companies [Computable]

The extensive and at times far-reaching artificial intelligence (AI) regulation that EU negotiators agreed on last night will not be without financial consequences for businesses. 'We have an AI deal. But an expensive one,' says...

18:10

Historic EU AI Agreement Reins In ChatGPT [Computable]

The EU AI Act will include rules for the 'foundation models' that underpin the enormous progress in AI. The European Commission reached an agreement on this last night with the European...

Eset Delivers DNS Filtering to KPN Customers [Computable]

IT security firm Eset is providing domain name system (DNS) filtering to telecom company KPN. The service is said to better protect the home networks of KPN customers against malware, phishing and unwanted content.

08-12-2023

18:02

Government Bodies Not Yet Working Well with the Woo [Computable]

Government organizations often do not yet apply the new Open Government Act (Woo) effectively, mainly due to limited capacity and a lack of priority. Civil servants also feel restricted in their freedom to give advice. This is evident...

West Brabant Schools Help SMEs Through Hackathon [Computable]

Students from the West Brabant educational institutions Avans, BUas and Curio are going to support entrepreneurs in their digital development. This Friday, a hackathon takes place in the so-called Digiwerkplaats Mkb, in which twenty Avans students, working in groups, build a sustainability dashboard for three...

CWI Organizes Cobol Event to Raise Urgency [Computable]

The Centrum Wiskunde & Informatica (CWI) is organizing an event on the future of Cobol and mainframes on January 18. For this strategic Cobol day, the center is partnering with Quuks and the Software Improvement Group (SIG). According to the organizers...

Plan for Cloud Restrictions Splits the EU [Computable]

A broad front is forming against the European Commission's plans for sovereignty requirements that mainly favor French cloud companies. The Netherlands has meanwhile secured the support of thirteen other EU member states in its opposition, including Germany...

Unilever Again Chooses SAP Warehouse System [Computable]

Because of the doubling of production capacity at its factory in Nyirbator, Hungary, Unilever had to take a new, larger local warehouse into use, along with a new warehouse management system (WMS). The food giant's choice once again fell on...

Lyvia Group Acquires Facility Kwadraat [Computable]

The Swedish Lyvia Group is making its first acquisition in the Netherlands: Facility Kwadraat. The Den Bosch-based company provides software-as-a-service (SaaS) for facility management, long-term maintenance planning, rental management and real-estate management.

Slow Adoption of Generative AI [Computable]

Despite great interest, a majority of large enterprises are not yet using generative AI (gen-AI) such as ChatGPT. Infrastructure in particular forms a barrier to implementing the large language models (LLMs) that...

ASM Puts 300 Million into American Expansion [Computable]

ASM, the chip-industry supplier until recently known as ASM International, will invest three hundred million dollars over the next five years in expanding its American operations. Its site in Arizona will be significantly expanded.

07-12-2023

10:03

With Gemini, Google Comes Very Close to OpenAI [Computable]

With the launch of Gemini, Google's largest and most ingenious artificial intelligence (AI) language model, the tech company is taking aim at the leading position of OpenAI's GPT-4. According to AI experts, the difference between the two large language models...

06-12-2023

21:15

Booking.com Hack Poses a Challenge for the Travel Sector [Computable]

The recent hack targeting Booking.com says everything about the impact of cybercrime on the hotel and travel sector. In the scam, customer data was stolen and offered for sale on the dark web. In the process,...

13:56

Van Oord Maps Climate Risks [Computable]

Van Oord has developed an open-source tool that is meant to provide insight into climate change and the risks that come with it. With the software, which combines multiple data layers, the dredging and marine engineering company wants to chart coastal areas and ecosystems worldwide...

30-08-2021

11:12

Django Authentication Video Tutorial [Simple is Better Than Complex]

Updated at Nov 8, 2018: New video added to the series: How to integrate Django forms with Bootstrap 4.

In this tutorial series, we are going to explore Django’s authentication system by implementing sign up, login, logout, password change, password reset and protected views from non-authenticated users. This tutorial is organized in 8 videos, one for each topic, ranging from 4 min to 15 min each.


Setup

Starting a Django project from scratch, creating a virtual environment and an initial Django app. After that, we are going to set up the templates and create an initial view to start working on the authentication.

If you are already familiar with Django, you can skip this video and jump to the Sign Up tutorial below.


Sign Up

The first thing we are going to do is implement a sign up view using the built-in UserCreationForm. In this video you are also going to get some insights into basic Django form processing.
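
For reference, here is a minimal sketch of what such a sign up view could look like (the signup.html template and the "home" URL name are assumptions for illustration, not taken from the video):

from django.contrib.auth import login
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import redirect, render


def signup(request):
    if request.method == "POST":
        form = UserCreationForm(request.POST)
        if form.is_valid():
            user = form.save()  # creates the user with a properly hashed password
            login(request, user)  # log the new user in right away
            return redirect("home")  # assumed URL name
    else:
        form = UserCreationForm()
    return render(request, "signup.html", {"form": form})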


Login

In this video tutorial we are first going to include the built-in Django auth URLs in our project and then proceed to implement the login view.
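
As a rough sketch, wiring up the built-in auth URLs could look like the snippet below (Django looks for the login template at registration/login.html by default; the "home" URL name used for LOGIN_REDIRECT_URL is an assumption):

# urls.py
from django.urls import include, path

urlpatterns = [
    path("accounts/", include("django.contrib.auth.urls")),  # login, logout, password views
    # ... your other routes
]

# settings.py
LOGIN_REDIRECT_URL = "home"  # assumed URL name to land on after a successful login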


Logout

In this tutorial we are going to include the Django logout and also start playing with conditional templates, displaying different content depending on whether the user is authenticated or not.


Password Change

Next, we implement the password change view, where an authenticated user can change their password.


Password Reset

This tutorial is perhaps the most complicated one, because it involves several views and also sending emails. In this video tutorial you are going to learn how to use the default implementation of the password reset process and how to change the email messages.


Protecting Views

After implementing the whole authentication system, this video gives you an overview of how to protect some views from non-authenticated users by using the @login_required decorator and also by using class-based view mixins.
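
Here is a short sketch of both approaches mentioned above (the view names and the secret.html template are illustrative only):

from django.contrib.auth.decorators import login_required
from django.contrib.auth.mixins import LoginRequiredMixin
from django.shortcuts import render
from django.views.generic import TemplateView


@login_required
def secret_page(request):
    # only reachable by authenticated users; others are redirected to the login page
    return render(request, "secret.html")


class SecretPageView(LoginRequiredMixin, TemplateView):
    # the class-based equivalent, using the mixin
    template_name = "secret.html"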


Bootstrap 4 Forms

Extra video showing how to integrate Django with Bootstrap 4 and how to use Django Crispy Forms to render Bootstrap forms properly. This video also includes some general advice and tips about using Bootstrap 4.


Conclusions

If you want to learn more about Django authentication and some extra stuff related to it, like how to use Bootstrap to make your auth forms look good, or how to write unit tests for your auth-related views, you can read the fourth part of my beginner's guide to Django: A Complete Beginner’s Guide to Django - Part 4 - Authentication.

Of course the official documentation is the best source of information: Using the Django authentication system

The code used in this tutorial: github.com/sibtc/django-auth-tutorial-example

This was my first time recording this kind of content, so your feedback is highly appreciated. Please let me know what you think!

And don’t forget to subscribe to my YouTube channel! I will post exclusive Django tutorials there. So stay tuned! :-)

09-07-2021

20:56

What You Should Know About The Django User Model [Simple is Better Than Complex]

The goal of this article is to discuss the caveats of the default Django user model implementation and also to give you some advice on how to address them. It is important to know the limitations of the current implementation so as to avoid the most common pitfalls.

Something to keep in mind is that the Django user model is heavily based on its initial implementation, which is at least 16 years old. Because users and authentication are a core part of the majority of web applications built with Django, most of its quirks have persisted in subsequent releases so as to maintain backward compatibility.

The good news is that Django offers many ways to override and customize its default implementation to fit your application's needs. But some of those changes must be made right at the beginning of the project; otherwise it will be too much of a hassle to change the database structure after your application is in production.

Below are the topics that we are going to cover in this article:


User Model Limitations

First, let’s explore the caveats and next we discuss the options.

The username field is case-sensitive

Even though the username field is marked as unique, by default it is case-sensitive. That means the usernames john.doe and John.doe identify two different users in your application.

This can be a security issue if your application has social aspects that build around the username, providing a public URL to a profile, like Twitter, Instagram or GitHub, for example.

It also delivers a poor user experience, because people don't expect john.doe to be a different username than John.Doe, and if the user does not type the username in exactly the same way as when they created their account, they might be unable to log in to your application.

Possible Solutions:

  • If you are using PostgreSQL, you can replace the username CharField with the CICharField instead (which is case-insensitive)
  • You can override the method get_by_natural_key from the UserManager to query the database using iexact
  • Create a custom authentication backend based on the ModelBackend implementation

The username field validates against unicode letters

This is not necessarily an issue, but it is important for you to understand what it means and what its effects are.

By default the username field accepts letters, numbers and the characters: @, ., +, -, and _.

The catch here is on which letters it accepts.

For example, joão would be a valid username. Similarly, Джон or 約翰 would also be valid usernames.

Django ships with two username validators: ASCIIUsernameValidator and UnicodeUsernameValidator. If the intended behavior is to only accept letters from A-Z, you may want to switch the username validator to use ASCII letters only by using the ASCIIUsernameValidator.

Possible Solutions:

  • Replace the default user model and change the username validator to ASCIIUsernameValidator
  • If you can’t replace the default user model, you can change the validator on the form you use to create/update the user

The email field is not unique

Multiple users can have the same email address associated with their account.

By default the email is used to recover a password. If there is more than one user with the same email address, the password reset will be initiated for all accounts and the user will receive an email for each active account.

This may also not be an issue, but it will certainly make it impossible to offer the option to authenticate users with their email address (like those sites that allow you to log in with either a username or an email address).

Possible Solutions:

  • Replace the default user model using the AbstractBaseUser to define the email field from scratch
  • If you can’t replace the user model, enforce the validation on the forms used to create/update

The email field is not mandatory

By default the email field does not allow null values, but it does allow blank values, so it effectively allows users to not provide an email address.

Again, this may not be an issue for your application. But if you intend to allow users to log in with their email, it may be a good idea to enforce the registration of this field.

When using built-in resources like the user creation forms, or when using model forms, you need to pay attention to this detail if the desired behavior is to always have the user's email.

Possible Solutions:

  • Replace the default user model using the AbstractBaseUser to define the email field from scratch
  • If you can’t replace the user model, enforce the validation on the forms used to create/update

A user without password cannot initiate a password reset

There is a small catch in the user creation process: if the set_password method is called with None as its parameter, it will set an unusable password. That also means the user will be unable to start a password reset to set their first password.

You can end up in that situation if you are using social networks like Facebook or Twitter to allow the user to create an account on your website.

Another way of ending up in this situation is simply by creating a user using the User.objects.create_user() or User.objects.create_superuser() without providing an initial password.

Possible Solutions:

  • If in your user creation flow you allow users to get started without setting a password, remember to pass a random (and lengthy) initial password so the user can later go through the password reset flow and set their first password (see the sketch below).
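
A minimal sketch of that idea, using a hypothetical social-signup helper (the function name is made up for illustration):

from django.contrib.auth import get_user_model
from django.utils.crypto import get_random_string

User = get_user_model()


def create_social_user(username, email):
    # set a long random password instead of an unusable one,
    # so the user can still initiate a password reset later
    return User.objects.create_user(username=username, email=email, password=get_random_string(64))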

Swapping the default user model is very difficult after you have created the initial migrations

Changing the user model is something you want to do early on. After your database schema is generated and your database is populated, it will be very tricky to swap the user model.

The reason is that you are likely to have foreign keys referencing the user table, and Django's internal tables will also create hard references to it. If you plan to change it later on, you will need to change and migrate the database by yourself.

Possible Solutions:

  • Whenever you are starting a new Django project, always swap the default user model, even if the default implementation fits all your needs. You can simply extend the AbstractUser and change a single configuration in the settings module. This will give you tremendous freedom, and it will make things much easier in the future should the requirements change.

Detailed Solutions

To address the limitations we discussed in this article we have two options: (1) implement workarounds to fix the behavior of the default user model; (2) replace the default user model altogether and fix the issues for good.

What dictates which approach you need to use is the stage your project is currently at.

  • If you have an existing project running in production that is using the default django.contrib.auth.models.User, go with the first option and implement the workarounds;
  • If you are just starting your Django project, start off on the right foot and go with option number two.

Workarounds

First, let's have a look at a few workarounds that you can implement if your project is already in production. Keep in mind that these solutions assume you don't have direct access to the User model, that is, you are currently using the default User model, importing it from django.contrib.auth.models.

If you did replace the User model, then jump to the next section for better tips on how to fix these issues.

Making username field case-insensitive

Before making any changes, you need to make sure you don't have conflicting usernames in your database. For example, if you have a user with the username maria and another with the username Maria, you have to plan a data migration first. It is difficult to tell you exactly what to do, because it really depends on how you want to handle it. One option is to append some digits to the username, but that can disturb the user experience.

Now let’s say you checked your database and there are no conflicting usernames and you are good to go.

The first thing you need to do is protect your sign up forms so they don't allow conflicting usernames to create accounts.

Then, on the user creation form used for sign up, you could validate the username like this:

def clean_username(self):
    username = self.cleaned_data.get("username")
    if User.objects.filter(username__iexact=username).exists():
        self.add_error("username", "A user with this username already exists.")
    return username

If you are handling user creation in a rest API using DRF, you can do something similar in your serializer:

def validate_username(self, value):
    if User.objects.filter(username__iexact=value).exists():
        raise serializers.ValidationError("A user with this username already exists.")
    return value

In the previous example the mentioned ValidationError is the one defined in the DRF.

The iexact lookup in the queryset filter queries the database ignoring case.

Now that the user creation is sanitized we can proceed to define a custom authentication backend.

Create a module named backends.py anywhere in your project and add the following snippet:

backends.py

from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend


class CaseInsensitiveModelBackend(ModelBackend):
    def authenticate(self, request, username=None, password=None, **kwargs):
        UserModel = get_user_model()
        if username is None:
            username = kwargs.get(UserModel.USERNAME_FIELD)
        try:
            case_insensitive_username_field = '{}__iexact'.format(UserModel.USERNAME_FIELD)
            user = UserModel._default_manager.get(**{case_insensitive_username_field: username})
        except UserModel.DoesNotExist:
            # Run the default password hasher once to reduce the timing
            # difference between an existing and a non-existing user (#20760).
            UserModel().set_password(password)
        else:
            if user.check_password(password) and self.user_can_authenticate(user):
                return user

Now switch the authentication backend in the settings.py module:

settings.py

AUTHENTICATION_BACKENDS = ('mysite.core.backends.CaseInsensitiveModelBackend', )

Please note that 'mysite.core.backends.CaseInsensitiveModelBackend' must be changed to the valid path, where you created the backends.py module.

It is important to have handled all conflicting users before changing the authentication backend, because otherwise the query could raise a MultipleObjectsReturned exception, resulting in a 500 error.

Fixing the username validation to accept ASCII letters only

Here we can borrow the built-in UsernameField and customize it to append the ASCIIUsernameValidator to the list of validators:

from django.contrib.auth.forms import UsernameField
from django.contrib.auth.validators import ASCIIUsernameValidator

class ASCIIUsernameField(UsernameField):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.validators.append(ASCIIUsernameValidator())

Then on the Meta of your User creation form you can replace the form field class:

class UserCreationForm(forms.ModelForm):
    # field definitions...

    class Meta:
        model = User
        fields = ("username",)
        field_classes = {'username': ASCIIUsernameField}
Fixing the email uniqueness and making it mandatory

Here all you can do is sanitize and handle the user input in all the views where your users can modify their email address.

You have to include the email field on your sign up form/serializer as well.

Then just make it mandatory like this:

class UserCreationForm(forms.ModelForm):
    email = forms.EmailField(required=True)
    # other field definitions...

    class Meta:
        model = User
        fields = ("username",)
        field_classes = {'username': ASCIIUsernameField}

    def clean_email(self):
        email = self.cleaned_data.get("email")
        if User.objects.filter(email__iexact=email).exists():
            self.add_error("email", _("A user with this email already exists."))
        return email

You can also check a complete and detailed example of this form on the project shared together with this post: userworkarounds

Replacing the default User model

Now I'm going to show you how I usually like to extend and replace the default User model. It is a little bit verbose, but that is the strategy that will give you access to all the inner parts of the User model and allow you to make it better.

To replace the User model you have two options: extending the AbstractBaseUser or extending the AbstractUser.

To illustrate what that means, I drew the following diagram of how the default Django user model is implemented:

User Model Diagram

The green circle identified with the label User is the one you actually import from django.contrib.auth.models, and that is the implementation we discussed in this article.

If you look at the source code, its implementation looks like this:

class User(AbstractUser):
    class Meta(AbstractUser.Meta):
        swappable = 'AUTH_USER_MODEL'

So basically it is just an implementation of the AbstractUser. Meaning all the fields and logic are implemented in the abstract class.

It is done that way so we can easily extend the User model by creating a subclass of AbstractUser and adding the other features and fields we like.

But there is a limitation: you can't override an existing model field. For example, you can't redefine the email field to make it mandatory or to change its length.

So extending the AbstractUser class is only useful when you want to modify its methods, add more fields or swap the objects manager.

If you want to remove a field or change how the field is defined, you have to extend the user model from the AbstractBaseUser.

The best strategy to have full control over the user model is creating a new concrete class from the PermissionsMixin and the AbstractBaseUser.

Note that the PermissionsMixin is only necessary if you intend to use the Django admin or the built-in permissions framework. If you are not planning to use it you can leave it out. And in the future if things change you can add the mixin and migrate the model and you are ready to go.

So the implementation strategy looks like this:

Custom User Model Diagram

Now I'm going to show you my go-to implementation. I always use PostgreSQL, which, in my opinion, is the best database to use with Django; at least it is the one with the most support and features. So I'm going to show an approach that uses PostgreSQL's CITextExtension. Then I will show some options if you are using other database engines.

For this implementation I always create an app named accounts:

django-admin startapp accounts

Then before adding any code I like to create an empty migration to install the PostgreSQL extensions that we are going to use:

python manage.py makemigrations accounts --empty --name="postgres_extensions"

Inside the migrations directory of the accounts app you will find an empty migration called 0001_postgres_extensions.py.

Modify the file to include the extension installation:

migrations/0001_postgres_extensions.py

from django.contrib.postgres.operations import CITextExtension
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
    ]

    operations = [
        CITextExtension()
    ]

Now let’s implement our model. Open the models.py file inside the accounts app.

I always grab the initial code directly from Django’s source on GitHub, copying the AbstractUser implementation, and modify it accordingly:

accounts/models.py

from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager
from django.contrib.auth.validators import ASCIIUsernameValidator
from django.contrib.postgres.fields import CICharField, CIEmailField
from django.core.mail import send_mail
from django.db import models
from django.utils import timezone
from django.utils.translation import gettext_lazy as _


class CustomUser(AbstractBaseUser, PermissionsMixin):
    username_validator = ASCIIUsernameValidator()

    username = CICharField(
        _("username"),
        max_length=150,
        unique=True,
        help_text=_("Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only."),
        validators=[username_validator],
        error_messages={
            "unique": _("A user with that username already exists."),
        },
    )
    first_name = models.CharField(_("first name"), max_length=150, blank=True)
    last_name = models.CharField(_("last name"), max_length=150, blank=True)
    email = CIEmailField(
        _("email address"),
        unique=True,
        error_messages={
            "unique": _("A user with that email address already exists."),
        },
    )
    is_staff = models.BooleanField(
        _("staff status"),
        default=False,
        help_text=_("Designates whether the user can log into this admin site."),
    )
    is_active = models.BooleanField(
        _("active"),
        default=True,
        help_text=_(
            "Designates whether this user should be treated as active. Unselect this instead of deleting accounts."
        ),
    )
    date_joined = models.DateTimeField(_("date joined"), default=timezone.now)

    objects = UserManager()

    EMAIL_FIELD = "email"
    USERNAME_FIELD = "username"
    REQUIRED_FIELDS = ["email"]

    class Meta:
        verbose_name = _("user")
        verbose_name_plural = _("users")

    def clean(self):
        super().clean()
        self.email = self.__class__.objects.normalize_email(self.email)

    def get_full_name(self):
        """
        Return the first_name plus the last_name, with a space in between.
        """
        full_name = "%s %s" % (self.first_name, self.last_name)
        return full_name.strip()

    def get_short_name(self):
        """Return the short name for the user."""
        return self.first_name

    def email_user(self, subject, message, from_email=None, **kwargs):
        """Send an email to this user."""
        send_mail(subject, message, from_email, [self.email], **kwargs)

Let’s review what we changed here:

  • We switched the username_validator to use ASCIIUsernameValidator
  • The username field now is using CICharField which is not case-sensitive
  • The email field is now mandatory, unique and is using CIEmailField which is not case-sensitive

On the settings module, add the following configuration:

settings.py

AUTH_USER_MODEL = "accounts.CustomUser"

Now we are ready to create our migrations:

python manage.py makemigrations 

Apply the migrations:

python manage.py migrate

And you should get a similar result if you are just creating your project and there are no other models/apps:

Operations to perform:
  Apply all migrations: accounts, admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK

If you check your database schema, you will see that there is no auth_user table (which is the default one); the users are now stored in the accounts_customuser table:

Database Scheme

And all the foreign keys to the user model will be created pointing to this table. That's why it is important to do this right at the beginning of your project, before you have created the database schema.

Now you have all the freedom. You could replace first_name and last_name with a single field called name. You could remove the username field and identify your User model by the email (then just make sure you change the USERNAME_FIELD property to email).

You can grab the source code on GitHub: customuser

Handling case-insensitivity without PostgreSQL

If you are not using PostgreSQL and want to implement case-insensitive authentication and you have direct access to the User model, a nice hack is to create a custom manager for the User model, like this:

accounts/models.py

from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager

class CustomUserManager(UserManager):
    def get_by_natural_key(self, username):
        case_insensitive_username_field = '{}__iexact'.format(self.model.USERNAME_FIELD)
        return self.get(**{case_insensitive_username_field: username})

class CustomUser(AbstractBaseUser, PermissionsMixin):
    # all the fields, etc...

    objects = CustomUserManager()

    # meta, methods, etc...

Then you could also sanitize the username field in the clean() method to always save it as lowercase, so you don't have to bother with case-variant or conflicting usernames:

def clean(self):
    super().clean()
    self.email = self.__class__.objects.normalize_email(self.email)
    self.username = self.username.lower()

Conclusions

In this tutorial we discussed a few caveats of the default User model implementation and presented a few options to address those issues.

The takeaway message here is: always replace the default User model.

If your project is already in production, don’t panic: there are ways to fix those issues following the recommendations in this post.

I also have two detailed blog posts, one on how to make the username field case-insensitive and another on how to extend the Django user model:

You can also explore the source code presented in this post on GitHub:

27-06-2021

09:33

How to Start a Production-Ready Django Project [Simple is Better Than Complex]

In this tutorial I’m going to show you how I usually start and organize a new Django project nowadays. I’ve tried many different configurations and ways to organize the project, but for the past 4 years or so this has been consistently my go-to setup.

Please note that this is not intended to be a “best practice” guide or to fit every use case. It's just the way I like to use Django, and it's also the approach I've found allows a project to grow in a healthy way.

Index


Premises

Usually those are the premises I take into account when setting up a project:

  • Separation of code and configuration
  • Multiple environments (production, staging, development, local)
  • Local/development environment first
  • Internationalization and localization
  • Testing and documentation
  • Static checks and styling rules
  • Not all apps must be pluggable
  • Debugging and logging

Environments/Modes

Usually I work with three environment dimensions in my code: local, tests and production. I like to see them as “modes” in which I run the project. What dictates which mode I'm running the project in is which settings.py I'm currently using.

Local

The local dimension always comes first. It is the settings and setup that a developer will use on their local machine.

All the defaults and configuration must be designed to serve the local development environment first.

The reason why I like to do it that way is that the project must be as simple as possible for a new hire to clone the repository, run the project and start coding.

The production environment will usually be configured and maintained by experienced developers and by those who are more familiar with the code base itself. And because the deployment should be automated, there is no reason for people to be re-creating the production server over and over again. So it is perfectly fine for the production setup to require a few extra steps and configuration.

Tests

The tests environment will also be available locally, so developers can test the code and run the static checks.

But the idea of the tests environment is to expose it to a CI environment like Travis CI, Circle CI, AWS Code Pipeline, etc.

It is a simple setup in which you can install the project and run all the unit tests.

Production

The production dimension is the real deal. This is the environment that goes live without the testing and debugging utilities.

I also use this “mode” or dimension to run the staging server.

A staging server is where you roll out new features and bug fixes before applying to the production server.

The idea here is that your staging server should run in production mode, and the only differences are going to be your static/media server and database server. This can be achieved just by changing the configuration, for example by pointing to a different database connection string.

But the main thing is that you should not have any conditionals in your code that check whether it is the production or the staging server. The project should run in exactly the same way as in production.


Project Configuration

Right from the beginning it is a good idea to set up a remote version control service. My go-to option is Git on GitHub. Usually I create the remote repository first and then clone it on my local machine to get started.

Let’s say our project is called simple, after creating the repository on GitHub I will create a directory named simple on my local machine, then within the simple directory I will clone the repository, like shown on the structure below:

simple/
└── simple/  (git repo)

Then I create the virtualenv outside of the Git repository:

simple/
├── simple/
└── venv/

Then alongside the simple and venv directories I may place some other support files related to the project which I do not plan to commit to the Git repository.

The reason I do that is that it is more convenient to destroy and re-create/re-clone either the virtual environment or the repository itself.

It is also good to store your virtual environment outside of the git repository/project root so you don’t need to bother ignoring its path when using libs like flake8, isort, black, tox, etc.

You can also use tools like virtualenvwrapper to manage your virtual environments, but I prefer doing it that way because everything is in one place. And if I no longer need to keep a given project on my local machine, I can delete it completely without leaving behind anything related to the project on my machine.

The next step is installing Django inside the virtualenv so we can use the django-admin commands.

source venv/bin/activate
pip install django

Inside the simple directory (where the git repository was cloned) start a new project:

django-admin startproject simple .

Pay attention to the . at the end of the command. It is necessary so that yet another directory called simple is not created.

So now the structure should be something like this:

simple/                   <- (1) Wrapper directory with all project contents including the venv
├── simple/               <- (2) Project root and git repository
│   ├── .git/
│   ├── manage.py
│   └── simple/           <- (3) Project package, apps, templates, static, etc
│       ├── __init__.py
│       ├── asgi.py
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

At this point I already complement the project package directory with three extra directories for templates, static and locale.

We are going to manage both templates and static files at the project level and the app level. These directories hold the global templates and static files.

The locale directory is necessary in case you are using i18n to translate your application into other languages. It is where you are going to store the .mo and .po files.

So the structure now should be something like this:

simple/
├── simple/
│   ├── .git/
│   ├── manage.py
│   └── simple/
│       ├── locale/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
└── venv/
Requirements

Inside the project root (2) I like to create a directory called requirements with all the .txt files, breaking down the project dependencies like this:

  • base.txt: Main dependencies, strictly necessary to make the project run. Common to all environments
  • tests.txt: Inherits from base.txt + test utilities
  • local.txt: Inherits from tests.txt + development utilities
  • production.txt: Inherits from base.txt + production only dependencies

Note that I do not have a staging.txt requirements file, that’s because the staging environment is going to use the production.txt requirements so we have an exact copy of the production environment.

simple/
├── simple/
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── tests.txt
│   └── simple/
│       ├── locale/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

Now let's have a look inside each of those requirements files and at the Python libraries that I always use, no matter what type of Django project I'm developing.

base.txt

dj-database-url==0.5.0
Django==3.2.4
psycopg2-binary==2.9.1
python-decouple==3.4
pytz==2021.1
  • dj-database-url: This is a very handy Django library for creating a one-line database connection string, which is convenient for storing in .env files in a safe way
  • Django: Django itself
  • psycopg2-binary: PostgreSQL is my go-to database when working with Django. So I always have it here for all my environments
  • python-decouple: A typed environment variable manager to help protect sensitive data that goes to your settings.py module. It also helps with decoupling configuration from source code
  • pytz: For timezone aware datetime fields

tests.txt

-r base.txt

black==21.6b0
coverage==5.5
factory-boy==3.2.0
flake8==3.9.2
isort==5.9.1
tox==3.23.1

The -r base.txt inherits all the requirements defined in the base.txt file

  • black: A Python auto-formatter so you don't have to bother with styling and formatting your code. It lets you focus on what really matters while coding and doing code reviews
  • coverage: Lib to generate test coverage reports of your project
  • factory-boy: A model factory to help you set up complex test cases where the code you are testing relies on multiple models being set up in a certain way
  • flake8: Checks for code complexity, PEPs, formatting rules, etc
  • isort: Auto-formatter for your imports so all imports are organized by blocks (standard library, Django, third-party, first-party, etc)
  • tox: I use tox as an interface for CI tools to run all code checks and unit tests

local.txt

-r tests.txt

django-debug-toolbar==3.2.1
ipython==7.25.0

The -r tests.txt inherits all the requirements defined in the base.txt and tests.txt files

  • django-debug-toolbar: 99% of the time I use it to debug the query count on complex views so you can optimize your database access
  • ipython: Improved Python shell. I use it all the time during the development phase to start some implementation or to inspect code

production.txt

-r base.txt

gunicorn==20.1.0
sentry-sdk==1.1.0

The -r base.txt inherits all the requirements defined in the base.txt file

  • gunicorn: A Python WSGI HTTP server for production used behind a proxy server like Nginx
  • sentry-sdk: Error reporting/logging tool to catch exceptions raised in production
Settings

Also following the environments and modes premise, I like to set up multiple settings modules. These are going to serve as the entry point to determine in which mode I'm running the project.

Inside the simple project package, I create a new directory called settings and break down the files like this:

simple/                       (1)
├── simple/                   (2)
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── tests.txt
│   └── simple/              (3)
│       ├── locale/
│       ├── settings/
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── local.py
│       │   ├── production.py
│       │   └── tests.py
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

Note that I removed the settings.py that used to live inside the simple/ (3) directory.

The majority of the code will live inside the base.py settings module.

Everything that we can set only once in base.py and change via python-decouple should be kept in base.py and never repeated or overridden in the other settings modules.

After the removal of the main settings.py a nice touch is to modify the manage.py file to set the local.py as the default settings module so we can still run commands like python manage.py runserver without any further parameters:

manage.py

#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys


def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'simple.settings.local')  # <- here!
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()

Now let’s have a look on each of those settings modules.

base.py

from pathlib import Path

import dj_database_url
from decouple import Csv, config

BASE_DIR = Path(__file__).resolve().parent.parent


# ==============================================================================
# CORE SETTINGS
# ==============================================================================

SECRET_KEY = config("SECRET_KEY", default="django-insecure$simple.settings.local")

DEBUG = config("DEBUG", default=True, cast=bool)

ALLOWED_HOSTS = config("ALLOWED_HOSTS", default="127.0.0.1,localhost", cast=Csv())

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]

DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"

ROOT_URLCONF = "simple.urls"

INTERNAL_IPS = ["127.0.0.1"]

WSGI_APPLICATION = "simple.wsgi.application"


# ==============================================================================
# MIDDLEWARE SETTINGS
# ==============================================================================

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
]


# ==============================================================================
# TEMPLATES SETTINGS
# ==============================================================================

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [BASE_DIR / "templates"],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]


# ==============================================================================
# DATABASES SETTINGS
# ==============================================================================

DATABASES = {
    "default": dj_database_url.config(
        default=config("DATABASE_URL", default="postgres://simple:simple@localhost:5432/simple"),
        conn_max_age=600,
    )
}


# ==============================================================================
# AUTHENTICATION AND AUTHORIZATION SETTINGS
# ==============================================================================

AUTH_PASSWORD_VALIDATORS = [
    {
        "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
    },
]


# ==============================================================================
# I18N AND L10N SETTINGS
# ==============================================================================

LANGUAGE_CODE = config("LANGUAGE_CODE", default="en-us")

TIME_ZONE = config("TIME_ZONE", default="UTC")

USE_I18N = True

USE_L10N = True

USE_TZ = True

LOCALE_PATHS = [BASE_DIR / "locale"]


# ==============================================================================
# STATIC FILES SETTINGS
# ==============================================================================

STATIC_URL = "/static/"

STATIC_ROOT = BASE_DIR.parent.parent / "static"

STATICFILES_DIRS = [BASE_DIR / "static"]

STATICFILES_FINDERS = (
    "django.contrib.staticfiles.finders.FileSystemFinder",
    "django.contrib.staticfiles.finders.AppDirectoriesFinder",
)


# ==============================================================================
# MEDIA FILES SETTINGS
# ==============================================================================

MEDIA_URL = "/media/"

MEDIA_ROOT = BASE_DIR.parent.parent / "media"



# ==============================================================================
# THIRD-PARTY SETTINGS
# ==============================================================================


# ==============================================================================
# FIRST-PARTY SETTINGS
# ==============================================================================

SIMPLE_ENVIRONMENT = config("SIMPLE_ENVIRONMENT", default="local")

A few comments on the overall base settings file contents:

  • The config() calls are from the python-decouple library. It exposes the configuration as an environment variable and retrieves its value, casting it to the expected data type. Read more about python-decouple in this guide: How to Use Python Decouple. A sample .env sketch follows after this list
  • See how configurations like SECRET_KEY, DEBUG and ALLOWED_HOSTS default to local/development environment values. That means a new developer won't need to set up a local .env file and provide initial values to run the project locally
  • In the database settings block we are using dj_database_url to translate the one-line connection string into the Python dictionary Django expects
  • Note how in MEDIA_ROOT we navigate two directories up to create a media directory outside the git repository but inside our project workspace (inside the simple/ (1) directory). That way everything stays handy and we won't be committing test uploads to our repository
  • At the end of the base.py settings I reserve two blocks: one for third-party Django libraries that I may install, such as Django Rest Framework or Django Crispy Forms, and one for first-party settings, meaning custom settings created exclusively for the project. I usually prefix them with the project name, like SIMPLE_XXX

local.py

# flake8: noqa

from .base import *

INSTALLED_APPS += ["debug_toolbar"]

MIDDLEWARE.insert(0, "debug_toolbar.middleware.DebugToolbarMiddleware")


# ==============================================================================
# EMAIL SETTINGS
# ==============================================================================

EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"

Here is where I will set up Django Debug Toolbar, for example, or set the email backend to display sent emails on the console instead of having to set up a valid email server while working on the project.

All the code that is only relevant for the development process goes here.

You can use it to set up other libs like Django Silk to run profiling without exposing it to production.

tests.py

# flake8: noqa

from .base import *

PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]


class DisableMigrations:
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return None


MIGRATION_MODULES = DisableMigrations()

Here I add configurations that help us run the test cases faster. Sometimes disabling the migrations may not work if you have interdependencies between the apps' models, so Django may fail to create the test database without the migrations.

In some projects it is better to keep the test database after the execution.

production.py

# flake8: noqa

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

import simple
from .base import *

# ==============================================================================
# SECURITY SETTINGS
# ==============================================================================

CSRF_COOKIE_SECURE = True
CSRF_COOKIE_HTTPONLY = True

SECURE_HSTS_SECONDS = 60 * 60 * 24 * 7 * 52  # one year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_SSL_REDIRECT = True
SECURE_BROWSER_XSS_FILTER = True
SECURE_CONTENT_TYPE_NOSNIFF = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")

SESSION_COOKIE_SECURE = True


# ==============================================================================
# THIRD-PARTY APPS SETTINGS
# ==============================================================================

sentry_sdk.init(
    dsn=config("SENTRY_DSN", default=""),
    environment=SIMPLE_ENVIRONMENT,
    release="simple@%s" % simple.__version__,
    integrations=[DjangoIntegration()],
)

The most important part of the production settings is enabling all the security settings Django offers. I like to do it this way because you can’t run the development server with most of those configurations turned on.

The other thing is the Sentry configuration.

Note the simple.__version__ on the release. Next we are going to explore how I usually manage the version of the project.

Version

I like to reuse Django’s get_version utility for a simple and PEP 440 compliant version identifier.

Inside the project’s __init__.py module:

simple/
├── simple/
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   └── simple/
│       ├── locale/
│       ├── settings/
│       ├── static/
│       ├── templates/
│       ├── __init__.py     <-- here!
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

You can do something like this:

from django import get_version

VERSION = (1, 0, 0, "final", 0)

__version__ = get_version(VERSION)

The only downside of using get_version directly from the Django module is that it won’t be able to resolve the git hash for alpha versions.

A possible solution is to make a copy of the django/utils/version.py file in your project and import it locally, so it will be able to identify your git repository within the project folder.

But it also depends on what kind of versioning you are using for your project. If the version of your project is not really relevant to the end user and you only want to keep track of it for internal management, like identifying the release on a Sentry issue, you could use date-based release versioning, as in the sketch below.
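
For example, a date-based version could be as simple as hard-coding the release date in the project’s __init__.py (just an illustration):

__version__ = "2021.03.04"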


Apps Configuration

A Django app is a Python package that you “install” using the INSTALLED_APPS in your settings file. An app can live pretty much anywhere: inside or outside the project package or even in a library that you installed using pip.

Indeed, your Django apps may be reusable in other projects. But that doesn’t mean they should be. Don’t let that possibility drive your project design, and don’t get obsessed over it. Also, an app doesn’t necessarily have to represent a “part” of your website/web application.

It is perfectly fine for some apps not to have models, or for other apps to have only views. Some of your modules don’t even need to be a Django app at all. I like to see my Django project as one big Python package and organize it in a way that makes sense, rather than trying to place everything inside reusable apps.

The general recommendation of the official Django documentation is to place your apps in the project root (alongside the manage.py file, identified here in this tutorial by the simple/ (2) folder).

But I actually prefer to create my apps inside the project package (identified in this tutorial by the simple/ (3) folder). I create a module named apps and inside it I create my Django apps. The main reason is that it creates a nice namespace for the apps: it helps you easily identify that a particular import is part of your project, and the namespace also helps when creating logging rules to handle events in a different way.
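
As an illustration of the logging benefit, here is a sketch of a LOGGING configuration that treats everything under the project namespace differently from third-party code (handler names and levels are illustrative):

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # Any logger created with logging.getLogger(__name__) inside
        # simple/apps/... falls under this rule
        "simple.apps": {"handlers": ["console"], "level": "INFO"},
    },
}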

Here is an example of how I do it:

simple/                      (1)
├── simple/                  (2)
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   └── simple/              (3)
│       ├── apps/            <-- here!
│       │   ├── __init__.py
│       │   ├── accounts/
│       │   └── core/
│       ├── locale/
│       ├── settings/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

In the example above the folders accounts/ and core/ are Django apps created with the command django-admin startapp.

These two apps are always present in my projects. The accounts app is the one I use to replace the default Django User model, and it is also where I eventually implement password reset, account activation, sign-ups, etc.

I use the core app for general/global implementations, for example to define a model that will be used across most of the other apps. I try to keep it decoupled from the other apps, not importing their resources. It is usually a good place to implement general-purpose or reusable views and mixins.

Something to pay attention to when using this approach is that you need to change the name of the app configuration, inside the apps.py file of the Django app:

accounts/apps.py

from django.apps import AppConfig

class AccountsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'accounts'  # <- this is the default name created by the startapp command

You should rename it like this, to respect the namespace:

from django.apps import AppConfig

class AccountsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'simple.apps.accounts'  # <- change to this!

Then in your INSTALLED_APPS you are going to reference your apps like this:

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    
    "simple.apps.accounts",
    "simple.apps.core",
]

The namespace also helps to organize your INSTALLED_APPS, making your project’s apps easily recognizable.

App Structure

This is what my app structure looks like:

simple/                              (1)
├── simple/                          (2)
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   └── simple/                      (3)
│       ├── apps/
│       │   ├── accounts/            <- My app structure
│       │   │   ├── migrations/
│       │   │   │   └── __init__.py
│       │   │   ├── static/
│       │   │   │   └── accounts/
│       │   │   ├── templates/
│       │   │   │   └── accounts/
│       │   │   ├── tests/
│       │   │   │   ├── __init__.py
│       │   │   │   └── factories.py
│       │   │   ├── __init__.py
│       │   │   ├── admin.py
│       │   │   ├── apps.py
│       │   │   ├── constants.py
│       │   │   ├── models.py
│       │   │   └── views.py
│       │   ├── core/
│       │   └── __init__.py
│       ├── locale/
│       ├── settings/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

The first thing I do is create a folder named tests so I can break down my tests into several files. I always add a factories.py to create my model factories using the factory-boy library.
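
A minimal sketch of what such a factories.py could contain, assuming factory-boy is installed (the model and fields are illustrative):

import factory
from django.contrib.auth import get_user_model


class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = get_user_model()

    # Generate unique usernames and derive the email from them
    username = factory.Sequence(lambda n: f"user{n}")
    email = factory.LazyAttribute(lambda obj: f"{obj.username}@example.com")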

For both static and templates, always create a directory with the same name as the app first, to avoid name collisions when Django collects all static files and resolves the templates.

The admin.py may or may not be there, depending on whether I’m using the Django Admin contrib app.

Other common modules that you may have are utils.py, forms.py, managers.py, services.py, etc.


Code style and formatting

Now I’m going to show you the configuration that I use for tools like isort, black, flake8, coverage and tox.

Editor Config

The .editorconfig file is a standard recognized by all major IDEs and code editors. It helps the editor understand the file formatting rules used in the project.

It tells the editor whether the project is indented with tabs or spaces, how many spaces/tabs to use, and what the maximum length of a line of code is.

I like to use Django’s .editorconfig file. Here is what it looks like:

.editorconfig

# https://editorconfig.org/

root = true

[*]
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true
end_of_line = lf
charset = utf-8

# Docstrings and comments use max_line_length = 79
[*.py]
max_line_length = 119

# Use 2 spaces for the HTML files
[*.html]
indent_size = 2

# The JSON files contain newlines inconsistently
[*.json]
indent_size = 2
insert_final_newline = ignore

[**/admin/js/vendor/**]
indent_style = ignore
indent_size = ignore

# Minified JavaScript files shouldn't be changed
[**.min.js]
indent_style = ignore
insert_final_newline = ignore

# Makefiles always use tabs for indentation
[Makefile]
indent_style = tab

# Batch files use tabs for indentation
[*.bat]
indent_style = tab

[docs/**.txt]
max_line_length = 79

[*.yml]
indent_size = 2
Flake8

Flake8 is a Python library that wraps PyFlakes, pycodestyle and Ned Batchelder’s McCabe script. It is a great toolkit for checking your code base against coding style (PEP 8) and programming errors (like “library imported but unused” and “undefined name”), and for measuring cyclomatic complexity.

To learn more about flake8, check this tutorial I posted a while ago: How to Use Flake8.

setup.cfg

[flake8]
exclude = .git,.tox,*/migrations/*
max-line-length = 119
isort

isort is a Python utility/library to sort imports alphabetically and automatically separate them into sections.

To learn more about isort, check this tutorial I posted a while ago: How to Use Python isort Library.

setup.cfg

[isort]
force_grid_wrap = 0
use_parentheses = true
combine_as_imports = true
include_trailing_comma = true
line_length = 119
multi_line_output = 3
skip = migrations
default_section = THIRDPARTY
known_first_party = simple
known_django = django
sections=FUTURE,STDLIB,DJANGO,THIRDPARTY,FIRSTPARTY,LOCALFOLDER

Pay attention to known_first_party: it should be the name of your project so isort can group your project’s imports.

Black

Black is a life-changing library that auto-formats your Python applications. There is no way I’m coding in Python nowadays without using Black.

Here is the basic configuration that I use:

pyproject.toml

[tool.black]
line-length = 119
target-version = ['py38']
include = '\.pyi?$'
exclude = '''
  /(
      \.eggs
    | \.git
    | \.hg
    | \.mypy_cache
    | \.tox
    | \.venv
    | _build
    | buck-out
    | build
    | dist
    | migrations
  )/
'''

Conclusions

In this tutorial I described my go-to project setup when working with Django. That’s pretty much how I start all my projects nowadays.

Here is the final project structure for reference:

simple/
├── simple/
│   ├── .git/
│   ├── .gitignore
│   ├── .editorconfig
│   ├── manage.py
│   ├── pyproject.toml
│   ├── requirements/
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── tests.txt
│   ├── setup.cfg
│   └── simple/
│       ├── __init__.py
│       ├── apps/
│       │   ├── accounts/
│       │   │   ├── migrations/
│       │   │   │   └── __init__.py
│       │   │   ├── static/
│       │   │   │   └── accounts/
│       │   │   ├── templates/
│       │   │   │   └── accounts/
│       │   │   ├── tests/
│       │   │   │   ├── __init__.py
│       │   │   │   └── factories.py
│       │   │   ├── __init__.py
│       │   │   ├── admin.py
│       │   │   ├── apps.py
│       │   │   ├── constants.py
│       │   │   ├── models.py
│       │   │   └── views.py
│       │   ├── core/
│       │   │   ├── migrations/
│       │   │   │   └── __init__.py
│       │   │   ├── static/
│       │   │   │   └── core/
│       │   │   ├── templates/
│       │   │   │   └── core/
│       │   │   ├── tests/
│       │   │   │   ├── __init__.py
│       │   │   │   └── factories.py
│       │   │   ├── __init__.py
│       │   │   ├── admin.py
│       │   │   ├── apps.py
│       │   │   ├── constants.py
│       │   │   ├── models.py
│       │   │   └── views.py
│       │   └── __init__.py
│       ├── locale/
│       ├── settings/
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── local.py
│       │   ├── production.py
│       │   └── tests.py
│       ├── static/
│       ├── templates/
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

You can also explore the code on GitHub: django-production-template.

04-03-2021

18:25

How to install Chrome OS on your (old) computer [Laatste Artikelen - Webwereld]

Google has been working hard on Chrome OS for years and, together with various computer manufacturers, releases Chrome devices running that operating system. But you don’t necessarily have to buy a dedicated device: you can also install the system on your (old) computer yourself, and we’ll show you how.

29-01-2021

12:47

How to Use Chart.js with Django [Simple is Better Than Complex]

Chart.js is a cool open source JavaScript library that helps you render HTML5 charts. It is responsive and offers 8 different chart types.

In this tutorial we are going to explore a little bit of how to make Django talk with Chart.js and render some simple charts based on data extracted from our models.

Installation

For this tutorial all you are going to do is add the Chart.js lib to your HTML page:

<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>

You can download it from Chart.js official website and use it locally, or you can use it from a CDN using the URL above.

Example Scenario

I’m going to use the same example I used in the tutorial How to Create Group By Queries With Django ORM, which is a good complement to this tutorial, because the tricky part of working with charts is usually transforming the data so it fits a bar chart, line chart, etc.

We are going to use the two models below, Country and City:

class Country(models.Model):
    name = models.CharField(max_length=30)

class City(models.Model):
    name = models.CharField(max_length=30)
    country = models.ForeignKey(Country, on_delete=models.CASCADE)
    population = models.PositiveIntegerField()

And the raw data stored in the database:

cities

id   name                 country_id   population
1    Tokyo                28           36,923,000
2    Shanghai             13           34,000,000
3    Jakarta              19           30,000,000
4    Seoul                21           25,514,000
5    Guangzhou            13           25,000,000
6    Beijing              13           24,900,000
7    Karachi              22           24,300,000
8    Shenzhen             13           23,300,000
9    Delhi                25           21,753,486
10   Mexico City          24           21,339,781
11   Lagos                9            21,000,000
12   São Paulo            1            20,935,204
13   Mumbai               25           20,748,395
14   New York City        20           20,092,883
15   Osaka                28           19,342,000
16   Wuhan                13           19,000,000
17   Chengdu              13           18,100,000
18   Dhaka                4            17,151,925
19   Chongqing            13           17,000,000
20   Tianjin              13           15,400,000
21   Kolkata              25           14,617,882
22   Tehran               11           14,595,904
23   Istanbul             2            14,377,018
24   London               26           14,031,830
25   Hangzhou             13           13,400,000
26   Los Angeles          20           13,262,220
27   Buenos Aires         8            13,074,000
28   Xi'an                13           12,900,000
29   Paris                6            12,405,426
30   Changzhou            13           12,400,000
31   Shantou              13           12,000,000
32   Rio de Janeiro       1            11,973,505
33   Manila               18           11,855,975
34   Nanjing              13           11,700,000
35   Rhine-Ruhr           16           11,470,000
36   Jinan                13           11,000,000
37   Bangalore            25           10,576,167
38   Harbin               13           10,500,000
39   Lima                 7            9,886,647
40   Zhengzhou            13           9,700,000
41   Qingdao              13           9,600,000
42   Chicago              20           9,554,598
43   Nagoya               28           9,107,000
44   Chennai              25           8,917,749
45   Bangkok              15           8,305,218
46   Bogotá               27           7,878,783
47   Hyderabad            25           7,749,334
48   Shenyang             13           7,700,000
49   Wenzhou              13           7,600,000
50   Nanchang             13           7,400,000
51   Hong Kong            13           7,298,600
52   Taipei               29           7,045,488
53   Dallas–Fort Worth    20           6,954,330
54   Santiago             14           6,683,852
55   Luanda               23           6,542,944
56   Houston              20           6,490,180
57   Madrid               17           6,378,297
58   Ahmedabad            25           6,352,254
59   Toronto              5            6,055,724
60   Philadelphia         20           6,051,170
61   Washington, D.C.     20           6,033,737
62   Miami                20           5,929,819
63   Belo Horizonte       1            5,767,414
64   Atlanta              20           5,614,323
65   Singapore            12           5,535,000
66   Barcelona            17           5,445,616
67   Munich               16           5,203,738
68   Stuttgart            16           5,200,000
69   Ankara               2            5,150,072
70   Hamburg              16           5,100,000
71   Pune                 25           5,049,968
72   Berlin               16           5,005,216
73   Guadalajara          24           4,796,050
74   Boston               20           4,732,161
75   Sydney               10           5,000,500
76   San Francisco        20           4,594,060
77   Surat                25           4,585,367
78   Phoenix              20           4,489,109
79   Monterrey            24           4,477,614
80   Inland Empire        20           4,441,890
81   Rome                 3            4,321,244
82   Detroit              20           4,296,611
83   Milan                3            4,267,946
84   Melbourne            10           4,650,000

countries

id   name
1    Brazil
2    Turkey
3    Italy
4    Bangladesh
5    Canada
6    France
7    Peru
8    Argentina
9    Nigeria
10   Australia
11   Iran
12   Singapore
13   China
14   Chile
15   Thailand
16   Germany
17   Spain
18   Philippines
19   Indonesia
20   United States
21   South Korea
22   Pakistan
23   Angola
24   Mexico
25   India
26   United Kingdom
27   Colombia
28   Japan
29   Taiwan

Example 1: Pie Chart

For the first example we are only going to retrieve the top 5 most populous cities and render them as a pie chart. In this strategy we return the chart data as part of the view context and inject the results into the JavaScript code using the Django Template Language.

views.py

from django.shortcuts import render
from mysite.core.models import City

def pie_chart(request):
    labels = []
    data = []

    queryset = City.objects.order_by('-population')[:5]
    for city in queryset:
        labels.append(city.name)
        data.append(city.population)

    return render(request, 'pie_chart.html', {
        'labels': labels,
        'data': data,
    })

In the view above we iterate through the City queryset and build a list of labels and a list of data points. In this case the data is the population count saved in the City model.

For the urls.py just a simple routing:

urls.py

from django.urls import path
from mysite.core import views

urlpatterns = [
    path('pie-chart/', views.pie_chart, name='pie-chart'),
]

Now the template. I got a basic snippet from the Chart.js Pie Chart Documentation.

pie_chart.html

{% extends 'base.html' %}

{% block content %}
  <div id="container" style="width: 75%;">
    <canvas id="pie-chart"></canvas>
  </div>

  <script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
  <script>

    var config = {
      type: 'pie',
      data: {
        datasets: [{
          data: {{ data|safe }},
          backgroundColor: [
            '#696969', '#808080', '#A9A9A9', '#C0C0C0', '#D3D3D3'
          ],
          label: 'Population'
        }],
        labels: {{ labels|safe }}
      },
      options: {
        responsive: true
      }
    };

    window.onload = function() {
      var ctx = document.getElementById('pie-chart').getContext('2d');
      window.myPie = new Chart(ctx, config);
    };

  </script>

{% endblock %}

In the example above the base.html template is not important, but you can see it in the code example I shared at the end of this post.

This strategy is not ideal, but it works fine. The downside is that we are using the Django Template Language to interfere with the JavaScript logic. When we put {{ data|safe }} we are injecting a variable that came from the server directly into the JavaScript code.

The code above looks like this:

Pie Chart
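
If you want to avoid mixing server data directly into the JavaScript, one alternative (not used in the example above) is Django’s json_script template filter, which serializes the context value into a <script type="application/json"> tag that you can read back with JSON.parse. A sketch of what that could look like in the same template (the element IDs are arbitrary):

{{ data|json_script:"chart-data" }}
{{ labels|json_script:"chart-labels" }}

<script>
  // Read the JSON the server rendered into the page, without |safe
  var data = JSON.parse(document.getElementById('chart-data').textContent);
  var labels = JSON.parse(document.getElementById('chart-labels').textContent);
</script>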


Example 2: Bar Chart with Ajax

As the title says, we are now going to render a bar chart using an async call.

views.py

from django.shortcuts import render
from django.db.models import Sum
from django.http import JsonResponse
from mysite.core.models import City

def home(request):
    return render(request, 'home.html')

def population_chart(request):
    labels = []
    data = []

    queryset = City.objects.values('country__name').annotate(country_population=Sum('population')).order_by('-country_population')
    for entry in queryset:
        labels.append(entry['country__name'])
        data.append(entry['country_population'])
    
    return JsonResponse(data={
        'labels': labels,
        'data': data,
    })

So here we are using two views. The home view is the main page where the chart is loaded. The other view, population_chart, has the sole responsibility of aggregating the data and returning a JSON response with the labels and data.

If you are wondering what this queryset is doing, it is grouping the cities by country and aggregating the total population of each country. The result is going to be a list of country + total population pairs. To learn more about this kind of query, have a look at this post: How to Create Group By Queries With Django ORM

urls.py

from django.urls import path
from mysite.core import views

urlpatterns = [
    path('', views.home, name='home'),
    path('population-chart/', views.population_chart, name='population-chart'),
]

home.html

{% extends 'base.html' %}

{% block content %}

  <div id="container" style="width: 75%;">
    <canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>
  </div>

  <script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
  <script>

    $(function () {

      var $populationChart = $("#population-chart");
      $.ajax({
        url: $populationChart.data("url"),
        success: function (data) {

          var ctx = $populationChart[0].getContext("2d");

          new Chart(ctx, {
            type: 'bar',
            data: {
              labels: data.labels,
              datasets: [{
                label: 'Population',
                backgroundColor: 'blue',
                data: data.data
              }]          
            },
            options: {
              responsive: true,
              legend: {
                position: 'top',
              },
              title: {
                display: true,
                text: 'Population Bar Chart'
              }
            }
          });

        }
      });

    });

  </script>

{% endblock %}

Now we have a better separation of concerns. Looking at the chart container:

<canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>

We added a reference to the URL that holds the chart rendering logic. Later on we are using it to execute the Ajax call.

var $populationChart = $("#population-chart");
$.ajax({
  url: $populationChart.data("url"),
  success: function (data) {
    // ...
  }
});

Inside the success callback we then finally execute the Chart.js related code using the JsonResponse data.

Bar Chart


Conclusions

I hope this tutorial helped you to get started with working with charts using Chart.js. I published another tutorial on the same subject a while ago but using the Highcharts library. The approach is pretty much the same: How to Integrate Highcharts.js with Django.

If you want to grab the code I used in this tutorial you can find it here: github.com/sibtc/django-chartjs-example.

How to Save Extra Data to a Django REST Framework Serializer [Simple is Better Than Complex]

In this tutorial you are going to learn how to pass extra data to your serializer, before saving it to the database.

Introduction

When using regular Django forms, there is this common pattern where we save the form with commit=False and then pass some extra data to the instance before saving it to the database, like this:

form = InvoiceForm(request.POST)
if form.is_valid():
    invoice = form.save(commit=False)
    invoice.user = request.user
    invoice.save()

This is very useful because we can save the required information using only one database query, and it also makes it possible to handle non-nullable columns that were not defined in the form.

To simulate this pattern using a Django REST Framework serializer you can do something like this:

serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
    serializer.save(user=request.user)

You can also pass several parameters at once:

serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
    serializer.save(user=request.user, date=timezone.now(), status='sent')

Example Using APIView

In this example I created an app named core.

models.py

from django.contrib.auth.models import User
from django.db import models

class Invoice(models.Model):
    SENT = 1
    PAID = 2
    VOID = 3
    STATUS_CHOICES = (
        (SENT, 'sent'),
        (PAID, 'paid'),
        (VOID, 'void'),
    )

    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='invoices')
    number = models.CharField(max_length=30)
    date = models.DateTimeField(auto_now_add=True)
    status = models.PositiveSmallIntegerField(choices=STATUS_CHOICES)
    amount = models.DecimalField(max_digits=10, decimal_places=2)

serializers.py

from rest_framework import serializers
from core.models import Invoice

class InvoiceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Invoice
        fields = ('number', 'amount')

views.py

from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView
from core.models import Invoice
from core.serializers import InvoiceSerializer

class InvoiceAPIView(APIView):
    def post(self, request):
        serializer = InvoiceSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        serializer.save(user=request.user, status=Invoice.SENT)
        return Response(status=status.HTTP_201_CREATED)

Example Using ViewSet

Very similar example, using the same models.py and serializers.py as in the previous example.

views.py

from rest_framework.viewsets import ModelViewSet
from core.models import Invoice
from core.serializers import InvoiceSerializer

class InvoiceViewSet(ModelViewSet):
    queryset = Invoice.objects.all()
    serializer_class = InvoiceSerializer

    def perform_create(self, serializer):
        serializer.save(user=self.request.user, status=Invoice.SENT)

How to Use Date Picker with Django [Simple is Better Than Complex]

In this tutorial we are going to explore three date/datetime picker options that you can easily use in a Django project. We are going to explore how to do it manually first, then how to set up a custom widget, and finally how to use a third-party Django app with support for datetime pickers.


Introduction

The implementation of a date picker is mostly done on the front-end.

The key part of the implementation is to ensure Django receives the date input value in the correct format, and also that Django is able to reproduce the format when rendering a form with initial data.

We can also use custom widgets to provide a deeper integration between the front-end and back-end and also to promote better reuse throughout a project.

In the next sections we are going to explore the following date pickers:

Tempus Dominus Bootstrap 4 Docs Source

Tempus Dominus Bootstrap 4

XDSoft DateTimePicker Docs Source

XDSoft DateTimePicker

Fengyuan Chen’s Datepicker Docs Source

Fengyuan Chen's Datepicker


Tempus Dominus Bootstrap 4

Docs Source

This is a great JavaScript library and it integrates well with Bootstrap 4. The downside is that it requires moment.js and sort of needs Font Awesome for the icons.

It only makes sense to use this library if you are already using Bootstrap 4 + jQuery; otherwise the list of CSS and JS dependencies may look a little bit overwhelming.

To install it you can use their CDN or download the latest release from their GitHub Releases page.

If you downloaded the code from the releases page, grab the processed code from the build/ folder.

Below, a static HTML example of the datepicker:

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <title>Static Example</title>

    <!-- Bootstrap 4 -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.6/umd/popper.min.js" integrity="sha384-wHAiFfRlMFy6i5SRaxvfOCifBUQy1xHdJ/yoi7FRNXMRBu5WHdZYu1hA6ZOblgut" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/js/bootstrap.min.js" integrity="sha384-B0UglyR+jN6CkvvICOB2joaf5I4l3gm9GU6Hc1og6Ls7i6U/mkkaduKaBhlAXv9k" crossorigin="anonymous"></script>

    <!-- Font Awesome -->
    <link href="https://stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN" crossorigin="anonymous">

    <!-- Moment.js -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.23.0/moment.min.js" integrity="sha256-VBLiveTKyUZMEzJd6z2mhfxIqz3ZATCuVMawPZGzIfA=" crossorigin="anonymous"></script>

    <!-- Tempus Dominus Bootstrap 4 -->
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/css/tempusdominus-bootstrap-4.min.css" integrity="sha256-XPTBwC3SBoWHSmKasAk01c08M6sIA5gF5+sRxqak2Qs=" crossorigin="anonymous" />
    <script src="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/js/tempusdominus-bootstrap-4.min.js" integrity="sha256-z0oKYg6xiLq3yJGsp/LsY9XykbweQlHl42jHv2XTBz4=" crossorigin="anonymous"></script>

  </head>
  <body>

    <div class="input-group date" id="datetimepicker1" data-target-input="nearest">
      <input type="text" class="form-control datetimepicker-input" data-target="#datetimepicker1"/>
      <div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
        <div class="input-group-text"><i class="fa fa-calendar"></i></div>
      </div>
    </div>

    <script>
      $(function () {
        $("#datetimepicker1").datetimepicker();
      });
    </script>

  </body>
</html>
Direct Usage

The challenge now is to have this input snippet integrated with a Django form.

forms.py

from django import forms

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=forms.DateTimeInput(attrs={
            'class': 'form-control datetimepicker-input',
            'data-target': '#datetimepicker1'
        })
    )

template

<div class="input-group date" id="datetimepicker1" data-target-input="nearest">
  {{ form.date }}
  <div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
    <div class="input-group-text"><i class="fa fa-calendar"></i></div>
  </div>
</div>

<script>
  $(function () {
    $("#datetimepicker1").datetimepicker({
      format: 'DD/MM/YYYY HH:mm',
    });
  });
</script>

The script tag can be placed anywhere because the snippet $(function () { ... }); will run the datetimepicker initialization when the page is ready. The only requirement is that this script tag is placed after the jQuery script tag.

Custom Widget

You can create the widget in any app you want; here I’m going to assume we have a Django app named core.

core/widgets.py

from django.forms import DateTimeInput

class BootstrapDateTimePickerInput(DateTimeInput):
    template_name = 'widgets/bootstrap_datetimepicker.html'

    def get_context(self, name, value, attrs):
        datetimepicker_id = 'datetimepicker_{name}'.format(name=name)
        if attrs is None:
            attrs = dict()
        attrs['data-target'] = '#{id}'.format(id=datetimepicker_id)
        attrs['class'] = 'form-control datetimepicker-input'
        context = super().get_context(name, value, attrs)
        context['widget']['datetimepicker_id'] = datetimepicker_id
        return context

In the implementation above we generate a unique ID datetimepicker_id and also include it in the widget context.

Then the front-end implementation is done inside the widget HTML snippet.

widgets/bootstrap_datetimepicker.html

<div class="input-group date" id="{{ widget.datetimepicker_id }}" data-target-input="nearest">
  {% include "django/forms/widgets/input.html" %}
  <div class="input-group-append" data-target="#{{ widget.datetimepicker_id }}" data-toggle="datetimepicker">
    <div class="input-group-text"><i class="fa fa-calendar"></i></div>
  </div>
</div>

<script>
  $(function () {
    $("#{{ widget.datetimepicker_id }}").datetimepicker({
      format: 'DD/MM/YYYY HH:mm',
    });
  });
</script>

Note how we make use of the built-in django/forms/widgets/input.html template.

Now the usage:

core/forms.py

from .widgets import BootstrapDateTimePickerInput

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'], 
        widget=BootstrapDateTimePickerInput()
    )

Now simply render the field:

template

{{ form.date }}

The good thing about having the widget is that your form could have several date fields using the widget and you could simply render the whole form like:

<form method="post">
  {% csrf_token %}
  {{ form.as_p }}
  <input type="submit" value="Submit">
</form>

XDSoft DateTimePicker

Docs Source

The XDSoft DateTimePicker is a very versatile date picker and doesn’t rely on moment.js or Bootstrap, although it looks good in a Bootstrap website.

It is easy to use and it is very straightforward.

You can download the source from the GitHub releases page.

Below, a static example so you can see the minimum requirements and how all the pieces come together:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <title>Static Example</title>

  <!-- jQuery -->
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>

  <!-- XDSoft DateTimePicker -->
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.min.css" integrity="sha256-DOS9W6NR+NFe1fUhEE0PGKY/fubbUCnOfTje2JMDw3Y=" crossorigin="anonymous" />
  <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.full.min.js" integrity="sha256-FEqEelWI3WouFOo2VWP/uJfs1y8KJ++FLh2Lbqc8SJk=" crossorigin="anonymous"></script>
</head>
<body>

  <input id="datetimepicker" type="text">

  <script>
    $(function () {
      $("#datetimepicker").datetimepicker();
    });
  </script>

</body>
</html>
Direct Usage

A basic integration with Django would look like this:

forms.py

from django import forms

class DateForm(forms.Form):
    date = forms.DateTimeField(input_formats=['%d/%m/%Y %H:%M'])

Simple form, default widget, nothing special.

Now using it on the template:

template

{{ form.date }}

<script>
  $(function () {
    $("#id_date").datetimepicker({
      format: 'd/m/Y H:i',
    });
  });
</script>

The id_date is the default ID Django generates for the form fields (id_ + name).

Custom Widget

core/widgets.py

from django.forms import DateTimeInput

class XDSoftDateTimePickerInput(DateTimeInput):
    template_name = 'widgets/xdsoft_datetimepicker.html'

widgets/xdsoft_datetimepicker.html

{% include "django/forms/widgets/input.html" %}

<script>
  $(function () {
    $("input[name='{{ widget.name }}']").datetimepicker({
      format: 'd/m/Y H:i',
    });
  });
</script>

To have a more generic implementation, this time we are selecting the field to initialize the component using its name instead of its id, should the user change the id prefix.

Now the usage:

core/forms.py

from django import forms
from .widgets import XDSoftDateTimePickerInput

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'], 
        widget=XDSoftDateTimePickerInput()
    )

template

{{ form.date }}

Fengyuan Chen’s Datepicker

Docs Source

This is a very beautiful and minimalist date picker. Unfortunately there is no time support. But if you only need dates this is a great choice.

To install this datepicker you can either use their CDN or download the sources from their GitHub releases page. Please note that they do not provide a compiled/processed JavaScript file, but you can download it to your local machine from the CDN.

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <title>Static Example</title>
  <style>body {font-family: Arial, sans-serif;}</style>
  
  <!-- jQuery -->
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>

  <!-- Fengyuan Chen's Datepicker -->
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.css" integrity="sha256-b88RdwbRJEzRx95nCuuva+hO5ExvXXnpX+78h8DjyOE=" crossorigin="anonymous" />
  <script src="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.js" integrity="sha256-/7FLTdzP6CfC1VBAj/rsp3Rinuuu9leMRGd354hvk0k=" crossorigin="anonymous"></script>
</head>
<body>

  <input id="datepicker">

  <script>
    $(function () {
      $("#datepicker").datepicker();
    });
  </script>

</body>
</html>
Direct Usage

A basic integration with Django (note that we are now using DateField instead of DateTimeField):

forms.py

from django import forms

class DateForm(forms.Form):
    date = forms.DateField(input_formats=['%d/%m/%Y'])

template

{{ form.date }}

<script>
  $(function () {
    $("#id_date").datepicker({
      format:'dd/mm/yyyy',
    });
  });
</script>
Custom Widget

core/widgets.py

from django.forms import DateInput

class FengyuanChenDatePickerInput(DateInput):
    template_name = 'widgets/fengyuanchen_datepicker.html'

widgets/fengyuanchen_datepicker.html

{% include "django/forms/widgets/input.html" %}

<script>
  $(function () {
    $("input[name='{{ widget.name }}']").datepicker({
      format:'dd/mm/yyyy',
    });
  });
</script>

Usage:

core/forms.py

from django import forms
from .widgets import FengyuanChenDatePickerInput

class DateForm(forms.Form):
    date = forms.DateField(
        input_formats=['%d/%m/%Y'],
        widget=FengyuanChenDatePickerInput()
    )

template

{{ form.date }}

Conclusions

The implementation is very similar no matter what date/datetime picker you are using. Hopefully this tutorial provided some insights on how to integrate this kind of frontend library into a Django project.

As always, the best source of information about each of those libraries are their official documentation.

I also created an example project to show the usage and implementation of the widgets for each of the libraries presented in this tutorial. Grab the source code at github.com/sibtc/django-datetimepicker-example.

How to Implement Grouped Model Choice Field [Simple is Better Than Complex]

The Django forms API has two field types to work with multiple options: ChoiceField and ModelChoiceField.

Both use select input as the default widget and they work in a similar way, except that ModelChoiceField is designed to handle QuerySets and work with foreign key relationships.

A basic implementation using a ChoiceField would be:

class ExpenseForm(forms.Form):
    CHOICES = (
        (11, 'Credit Card'),
        (12, 'Student Loans'),
        (13, 'Taxes'),
        (21, 'Books'),
        (22, 'Games'),
        (31, 'Groceries'),
        (32, 'Restaurants'),
    )
    amount = forms.DecimalField()
    date = forms.DateField()
    category = forms.ChoiceField(choices=CHOICES)
Django ChoiceField

Grouped Choice Field

You can also organize the choices in groups to generate the <optgroup> tags like this:

class ExpenseForm(forms.Form):
    CHOICES = (
        ('Debt', (
            (11, 'Credit Card'),
            (12, 'Student Loans'),
            (13, 'Taxes'),
        )),
        ('Entertainment', (
            (21, 'Books'),
            (22, 'Games'),
        )),
        ('Everyday', (
            (31, 'Groceries'),
            (32, 'Restaurants'),
        )),
    )
    amount = forms.DecimalField()
    date = forms.DateField()
    category = forms.ChoiceField(choices=CHOICES)
Django Grouped ChoiceField

Grouped Model Choice Field

When you are using a ModelChoiceField unfortunately there is no built-in solution.

Recently I found a nice solution on Django’s ticket tracker, where someone proposed adding an opt_group argument to the ModelChoiceField.

While the discussion is still ongoing, Simon Charette proposed a really good solution.

Let’s see how we can integrate it in our project.

First consider the following models:

models.py

from django.db import models

class Category(models.Model):
    name = models.CharField(max_length=30)
    parent = models.ForeignKey('Category', on_delete=models.CASCADE, null=True)

    def __str__(self):
        return self.name

class Expense(models.Model):
    amount = models.DecimalField(max_digits=10, decimal_places=2)
    date = models.DateField()
    category = models.ForeignKey(Category, on_delete=models.CASCADE)

    def __str__(self):
        return self.amount

So now, instead of being a regular choices field, category is a model, and the Expense model has a relationship with it through a foreign key.

If we create a ModelForm using this model, the result will be very similar to our first example.
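
For reference, that plain ModelForm would look something like this (it renders category as a flat select, with no groups):

from django import forms
from .models import Expense


class ExpenseForm(forms.ModelForm):
    class Meta:
        model = Expense
        fields = ('amount', 'date', 'category')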

To simulate grouped categories you will need the code below. First, create a new module named fields.py:

fields.py

from functools import partial
from itertools import groupby
from operator import attrgetter

from django.forms.models import ModelChoiceIterator, ModelChoiceField


class GroupedModelChoiceIterator(ModelChoiceIterator):
    def __init__(self, field, groupby):
        self.groupby = groupby
        super().__init__(field)

    def __iter__(self):
        if self.field.empty_label is not None:
            yield ("", self.field.empty_label)
        queryset = self.queryset
        # Can't use iterator() when queryset uses prefetch_related()
        if not queryset._prefetch_related_lookups:
            queryset = queryset.iterator()
        for group, objs in groupby(queryset, self.groupby):
            yield (group, [self.choice(obj) for obj in objs])


class GroupedModelChoiceField(ModelChoiceField):
    def __init__(self, *args, choices_groupby, **kwargs):
        if isinstance(choices_groupby, str):
            choices_groupby = attrgetter(choices_groupby)
        elif not callable(choices_groupby):
            raise TypeError('choices_groupby must either be a str or a callable accepting a single argument')
        self.iterator = partial(GroupedModelChoiceIterator, groupby=choices_groupby)
        super().__init__(*args, **kwargs)

And here is how you use it in your forms:

forms.py

from django import forms
from .fields import GroupedModelChoiceField
from .models import Category, Expense

class ExpenseForm(forms.ModelForm):
    category = GroupedModelChoiceField(
        queryset=Category.objects.exclude(parent=None), 
        choices_groupby='parent'
    )

    class Meta:
        model = Expense
        fields = ('amount', 'date', 'category')
Django Grouped ModelChoiceField

Because in the example above I used a self-referencing relationship, I had to add exclude(parent=None) to keep the “group categories” from showing up in the select input as valid options.
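
To make the grouping visible, the data could be set up with parent categories and children like this (hypothetical example data, mirroring the choices from the first example):

debt = Category.objects.create(name='Debt')
entertainment = Category.objects.create(name='Entertainment')

# Child categories point to their parent; only these show up as selectable options
Category.objects.create(name='Credit Card', parent=debt)
Category.objects.create(name='Student Loans', parent=debt)
Category.objects.create(name='Books', parent=entertainment)
Category.objects.create(name='Games', parent=entertainment)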


Further Reading

You can download the code used in this tutorial from GitHub: github.com/sibtc/django-grouped-choice-field-example

Credits for the solution go to Simon Charette on the Django ticket tracker.

How to Use JWT Authentication with Django REST Framework [Simple is Better Than Complex]

JWT stands for JSON Web Token and it is an authentication strategy used by client/server applications where the client is a Web application using JavaScript and some frontend framework like Angular, React or VueJS.

In this tutorial we are going to explore the specifics of JWT authentication. If you want to learn more about Token-based authentication using Django REST Framework (DRF), or if you want to know how to start a new DRF project you can read this tutorial: How to Implement Token Authentication using Django REST Framework. The concepts are the same, we are just going to switch the authentication backend.


How Does JWT Work?

The JWT is just an authorization token that should be included in all requests:

curl http://127.0.0.1:8000/hello/ -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9.Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY'

The JWT is acquired by exchanging a username + password for an access token and a refresh token.

The access token is usually short-lived (expires in 5 min or so, can be customized though).

The refresh token lives a little bit longer (expires in 24 hours, also customizable). It is comparable to an authentication session. After it expires, you need a full login with username + password again.

Why is that?

It’s a security feature, and it’s also because the JWT holds a little bit more information. If you look closely at the example I gave above, you will see the token is composed of three parts:

xxxxx.yyyyy.zzzzz

Those are three distinctive parts that compose a JWT:

header.payload.signature

So we have here:

header = eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
payload = eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9
signature = Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY

This information is encoded using Base64. If we decode, we will see something like this:

header

{
  "typ": "JWT",
  "alg": "HS256"
}

payload

{
  "token_type": "access",
  "exp": 1543828431,
  "jti": "7f5997b7150d46579dc2b49167097e7b",
  "user_id": 1
}

signature

The signature is issued by the JWT backend, using the header base64 + payload base64 + SECRET_KEY. Upon each request this signature is verified. If any information in the header or in the payload was changed by the client it will invalidate the signature. The only way of checking and validating the signature is by using your application’s SECRET_KEY. Among other things, that’s why you should always keep your SECRET_KEY secret!
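
If you want to inspect a token yourself, here is a quick sketch of decoding the header and payload locally (this only reads the Base64 data, it does not verify the signature):

import base64
import json


def decode_segment(segment):
    # JWT segments are base64url-encoded without padding, so re-add it before decoding
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))


header_segment = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9"  # the header from the example above
print(decode_segment(header_segment))  # {'typ': 'JWT', 'alg': 'HS256'}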


Installation & Setup

For this tutorial we are going to use the djangorestframework_simplejwt library, recommended by the DRF developers.

pip install djangorestframework_simplejwt

settings.py

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework_simplejwt.authentication.JWTAuthentication',
    ],
}

urls.py

from django.urls import path
from rest_framework_simplejwt import views as jwt_views

urlpatterns = [
    # Your URLs...
    path('api/token/', jwt_views.TokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('api/token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token_refresh'),
]

Example Code

For this tutorial I will use the following route and API view:

views.py

from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated


class HelloView(APIView):
    permission_classes = (IsAuthenticated,)

    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)

urls.py

from django.urls import path
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
]

Usage

I will be using HTTPie to consume the API endpoints via the terminal. But you can also use cURL (readily available in many OS) to try things out locally.

Or alternatively, use the DRF web interface by accessing the endpoint URLs like this:

DRF JWT Obtain Token

Obtain Token

First step is to authenticate and obtain the token. The endpoint is /api/token/ and it only accepts POST requests.

http post http://127.0.0.1:8000/api/token/ username=vitor password=123

HTTPie JWT Obtain Token

So basically your response body is the two tokens:

{
    "access": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjU5LCJqdGkiOiIyYmQ1NjI3MmIzYjI0YjNmOGI1MjJlNThjMzdjMTdlMSIsInVzZXJfaWQiOjF9.D92tTuVi_YcNkJtiLGHtcn6tBcxLCBxz9FKD3qzhUg8",
    "refresh": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMxMDM1OSwianRpIjoiMjk2ZDc1ZDA3Nzc2NDE0ZjkxYjhiOTY4MzI4NGRmOTUiLCJ1c2VyX2lkIjoxfQ.rA-mnGRg71NEW_ga0sJoaMODS5ABjE5HnxJDb0F8xAo"
}

After that you are going to store both the access token and the refresh token on the client side, usually in the localStorage.

In order to access the protected views on the backend (i.e., the API endpoints that require authentication), you should include the access token in the header of all requests, like this:

http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"

HTTPie JWT Hello, World!

You can use this access token for the next five minutes.

After five min, the token will expire, and if you try to access the view again, you are going to get the following error:

http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"

HTTPie JWT Expired

Refresh Token

To get a new access token, you should use the refresh token endpoint /api/token/refresh/ posting the refresh token:

http post http://127.0.0.1:8000/api/token/refresh/ refresh=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMwODIyMiwianRpIjoiNzAyOGFlNjc0ZTdjNDZlMDlmMzUwYjg3MjU1NGUxODQiLCJ1c2VyX2lkIjoxfQ.Md8AO3dDrQBvWYWeZsd_A1J39z6b6HEwWIUZ7ilOiPE

HTTPie JWT Refresh Token

The return is a new access token that you should use in the subsequent requests.

The refresh token is valid for the next 24 hours. When it finally expires too, the user will need to perform a full authentication again using their username and password to get a new set of access token + refresh token.


What’s The Point of The Refresh Token?

At first glance the refresh token may look pointless, but in fact it is necessary to make sure the user still has the correct permissions. If your access token has a long expiration time, it may take longer to update the information associated with the token. That’s because the authentication check is done by cryptographic means, instead of querying the database and verifying the data. So some information is sort of cached.

There is also a security aspect, in the sense that the refresh token only travels in the POST data, while the access token is sent via an HTTP header, which may be logged along the way. So this also gives a short window of exposure, should your access token be compromised.


Further Reading

This should cover the basics on the backend implementation. It’s worth checking the djangorestframework_simplejwt settings for further customization and to get a better idea of what the library offers.
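
For instance, the token lifetimes mentioned earlier can be adjusted through the SIMPLE_JWT setting (a sketch; the values are illustrative):

from datetime import timedelta

SIMPLE_JWT = {
    'ACCESS_TOKEN_LIFETIME': timedelta(minutes=5),
    'REFRESH_TOKEN_LIFETIME': timedelta(days=1),
}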

The implementation on the frontend depends on what framework/library you are using. Several libraries and articles cover popular frontend frameworks like Angular, React and Vue.js.

The code used in this tutorial is available at github.com/sibtc/drf-jwt-example.

Advanced Form Rendering with Django Crispy Forms [Simple is Better Than Complex]

[Django 2.1.3 / Python 3.6.5 / Bootstrap 4.1.3]

In this tutorial we are going to explore some of the Django Crispy Forms features to handle advanced/custom forms rendering. This blog post started as a discussion in our community forum, so I decided to compile the insights and solutions in a blog post to benefit a wider audience.



Introduction

Throughout this tutorial we are going to implement the following Bootstrap 4 form using Django APIs:

Bootstrap 4 Form

This was taken from Bootstrap 4 official documentation as an example of how to use form rows.

NOTE!

The examples below refer to a base.html template. Consider the code below:

base.html

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
</head>
<body>
  <div class="container">
    {% block content %}
    {% endblock %}
  </div>
</body>
</html>

Installation

Install it using pip:

pip install django-crispy-forms

Add it to your INSTALLED_APPS and select which styles to use:

settings.py

INSTALLED_APPS = [
    ...

    'crispy_forms',
]

CRISPY_TEMPLATE_PACK = 'bootstrap4'

For detailed instructions about how to install django-crispy-forms, please refer to this tutorial: How to Use Bootstrap 4 Forms With Django


Basic Form Rendering

The Python code required to represent the form above is the following:

from django import forms

STATES = (
    ('', 'Choose...'),
    ('MG', 'Minas Gerais'),
    ('SP', 'Sao Paulo'),
    ('RJ', 'Rio de Janeiro')
)

class AddressForm(forms.Form):
    email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
    password = forms.CharField(widget=forms.PasswordInput())
    address_1 = forms.CharField(
        label='Address',
        widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
    )
    address_2 = forms.CharField(
        widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
    )
    city = forms.CharField()
    state = forms.ChoiceField(choices=STATES)
    zip_code = forms.CharField(label='Zip')
    check_me_out = forms.BooleanField(required=False)

In this case I’m using a regular Form, but it could also be a ModelForm based on a Django model with similar fields. The state field and the STATES choices could be either a foreign key or anything else. Here I’m just using a simple static example with three Brazilian states.

Template:

{% extends 'base.html' %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    <table>{{ form.as_table }}</table>
    <button type="submit">Sign in</button>
  </form>
{% endblock %}

Rendered HTML:

Simple Django Form

Rendered HTML with validation state:

Simple Django Form Validation State


Basic Crispy Form Rendering

Same form code as in the example before.

Template:

{% extends 'base.html' %}

{% load crispy_forms_tags %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {{ form|crispy }}
    <button type="submit" class="btn btn-primary">Sign in</button>
  </form>
{% endblock %}

Rendered HTML:

Crispy Django Form

Rendered HTML with validation state:

Crispy Django Form Validation State


Custom Fields Placement with Crispy Forms

Same form code as in the first example.

Template:

{% extends 'base.html' %}

{% load crispy_forms_tags %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    <div class="form-row">
      <div class="form-group col-md-6 mb-0">
        {{ form.email|as_crispy_field }}
      </div>
      <div class="form-group col-md-6 mb-0">
        {{ form.password|as_crispy_field }}
      </div>
    </div>
    {{ form.address_1|as_crispy_field }}
    {{ form.address_2|as_crispy_field }}
    <div class="form-row">
      <div class="form-group col-md-6 mb-0">
        {{ form.city|as_crispy_field }}
      </div>
      <div class="form-group col-md-4 mb-0">
        {{ form.state|as_crispy_field }}
      </div>
      <div class="form-group col-md-2 mb-0">
        {{ form.zip_code|as_crispy_field }}
      </div>
    </div>
    {{ form.check_me_out|as_crispy_field }}
    <button type="submit" class="btn btn-primary">Sign in</button>
  </form>
{% endblock %}

Rendered HTML:

Custom Crispy Django Form

Rendered HTML with validation state:

Custom Crispy Django Form Validation State


Crispy Forms Layout Helpers

We could use the crispy forms layout helpers to achieve the same result as above. The implementation is done inside the form __init__ method:

forms.py

from django import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Submit, Row, Column

STATES = (
    ('', 'Choose...'),
    ('MG', 'Minas Gerais'),
    ('SP', 'Sao Paulo'),
    ('RJ', 'Rio de Janeiro')
)

class AddressForm(forms.Form):
    email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
    password = forms.CharField(widget=forms.PasswordInput())
    address_1 = forms.CharField(
        label='Address',
        widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
    )
    address_2 = forms.CharField(
        widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
    )
    city = forms.CharField()
    state = forms.ChoiceField(choices=STATES)
    zip_code = forms.CharField(label='Zip')
    check_me_out = forms.BooleanField(required=False)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            Row(
                Column('email', css_class='form-group col-md-6 mb-0'),
                Column('password', css_class='form-group col-md-6 mb-0'),
                css_class='form-row'
            ),
            'address_1',
            'address_2',
            Row(
                Column('city', css_class='form-group col-md-6 mb-0'),
                Column('state', css_class='form-group col-md-4 mb-0'),
                Column('zip_code', css_class='form-group col-md-2 mb-0'),
                css_class='form-row'
            ),
            'check_me_out',
            Submit('submit', 'Sign in')
        )

The template implementation is very minimal:

{% extends 'base.html' %}

{% load crispy_forms_tags %}

{% block content %}
  {% crispy form %}
{% endblock %}

The end result is the same.

Rendered HTML:

Custom Crispy Django Form

Rendered HTML with validation state:

Custom Crispy Django Form Validation State


Custom Crispy Field

You may also customize the field template and easily reuse it throughout your application. Let’s say we want to use the custom Bootstrap 4 checkbox:

Bootstrap 4 Custom Checkbox

From the official documentation, this is the HTML needed to render the input above:

<div class="custom-control custom-checkbox">
  <input type="checkbox" class="custom-control-input" id="customCheck1">
  <label class="custom-control-label" for="customCheck1">Check this custom checkbox</label>
</div>

Using the crispy forms API, we can create a new template for this custom field in our “templates” folder:

custom_checkbox.html

{% load crispy_forms_field %}

<div class="form-group">
  <div class="custom-control custom-checkbox">
    {% crispy_field field 'class' 'custom-control-input' %}
    <label class="custom-control-label" for="{{ field.id_for_label }}">{{ field.label }}</label>
  </div>
</div>

Now we can create a new crispy field, either in our forms.py module or in a new Python module such as fields.py.

forms.py

from crispy_forms.layout import Field

class CustomCheckbox(Field):
    template = 'custom_checkbox.html'

We can use it now in our form definition:

forms.py

class CustomFieldForm(AddressForm):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            Row(
                Column('email', css_class='form-group col-md-6 mb-0'),
                Column('password', css_class='form-group col-md-6 mb-0'),
                css_class='form-row'
            ),
            'address_1',
            'address_2',
            Row(
                Column('city', css_class='form-group col-md-6 mb-0'),
                Column('state', css_class='form-group col-md-4 mb-0'),
                Column('zip_code', css_class='form-group col-md-2 mb-0'),
                css_class='form-row'
            ),
            CustomCheckbox('check_me_out'),  # <-- Here
            Submit('submit', 'Sign in')
        )

(PS: the AddressForm is the same one defined in the previous example.)

The end result:

Bootstrap 4 Custom Checkbox


Conclusions

There is much more that Django Crispy Forms can do. Hopefully this tutorial gave you some extra insight into how to use the form helpers and layout classes. As always, the official documentation is the best source of information:

Django Crispy Forms layouts docs

Also, the code used in this tutorial is available on GitHub at github.com/sibtc/advanced-crispy-forms-examples.

How to Implement Token Authentication using Django REST Framework [Simple is Better Than Complex]

In this tutorial you are going to learn how to implement token-based authentication using Django REST Framework (DRF). Token authentication works by exchanging a username and password for a token that is then used in all subsequent requests to identify the user on the server side.

The specifics of how the authentication is handled on the client side vary a lot depending on the technology/language/framework you are working with. The client could be a mobile application using iOS or Android. It could be a desktop application using Python or C++. It could be a Web application using PHP or Ruby.

But once you understand the overall process, it’s easier to find the necessary resources and documentation for your specific use case.

Token authentication is suitable for client-server applications where the token is safely stored. You should never expose your token, as doing so would be (more or less) equivalent to handing out your username and password.



Setting Up The REST API Project

So let’s start from the very beginning. Install Django and DRF:

pip install django
pip install djangorestframework

Create a new Django project:

django-admin.py startproject myapi .

Navigate to the myapi folder:

cd myapi

Start a new app. I will call my app core:

django-admin.py startapp core

Here is what your project structure should look like:

myapi/
 |-- core/
 |    |-- migrations/
 |    |-- __init__.py
 |    |-- admin.py
 |    |-- apps.py
 |    |-- models.py
 |    |-- tests.py
 |    +-- views.py
 |-- __init__.py
 |-- settings.py
 |-- urls.py
 +-- wsgi.py
manage.py

Add the core app (you created) and the rest_framework app (you installed) to the INSTALLED_APPS, inside the settings.py module:

myapi/settings.py

INSTALLED_APPS = [
    # Django Apps
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # Third-Party Apps
    'rest_framework',

    # Local Apps (Your project's apps)
    'myapi.core',
]

Return to the project root (the folder where the manage.py script is), and migrate the database:

python manage.py migrate

Let’s create our first API view just to test things out:

myapi/core/views.py

from rest_framework.views import APIView
from rest_framework.response import Response

class HelloView(APIView):
    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)

Now register a path in the urls.py module:

myapi/urls.py

from django.urls import path
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
]

So now we have an API with a single endpoint, /hello/, on which we can perform GET requests. We can consume this endpoint directly from the browser, just by accessing the URL http://127.0.0.1:8000/hello/:

Hello Endpoint HTML

We can also ask to receive the response as plain JSON data by passing the format parameter in the querystring like http://127.0.0.1:8000/hello/?format=json:

Hello Endpoint JSON

Both methods are fine for trying out a DRF API, but sometimes a command-line tool is handier because it lets us play more easily with the request headers. You can use cURL, which is widely available on major Linux distributions and on macOS:

curl http://127.0.0.1:8000/hello/

Hello Endpoint cURL

But usually I prefer to use HTTPie, which is a pretty awesome Python command line tool:

http http://127.0.0.1:8000/hello/

Hello Endpoint HTTPie

Now let’s protect this API endpoint so we can implement the token authentication:

myapi/core/views.py

from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated  # <-- Here


class HelloView(APIView):
    permission_classes = (IsAuthenticated,)             # <-- And here

    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)

Try again to access the API endpoint:

http http://127.0.0.1:8000/hello/

Hello Endpoint HTTPie Forbidden

Now we get an HTTP 403 Forbidden error. Let’s implement token authentication so we can access this endpoint.


Implementing the Token Authentication

We need to add two pieces of information to our settings.py module. First, add rest_framework.authtoken to your INSTALLED_APPS, then add TokenAuthentication to the REST_FRAMEWORK settings:

myapi/settings.py

INSTALLED_APPS = [
    # Django Apps
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # Third-Party Apps
    'rest_framework',
    'rest_framework.authtoken',  # <-- Here

    # Local Apps (Your project's apps)
    'myapi.core',
]

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.TokenAuthentication',  # <-- And here
    ],
}

Migrate the database to create the table that will store the authentication tokens:

python manage.py migrate

Migrate Auth Token

Now we need a user account. Let’s just create one using the manage.py command line utility:

python manage.py createsuperuser --username vitor --email vitor@example.com

The easiest way to generate a token, just for testing purposes, is to use the command line utility again:

python manage.py drf_create_token vitor

drf_create_token

This piece of information, the random string 9054f7aa9305e012b3c2300408c3dfdf390fcddf, is what we are going to use next to authenticate.
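If you prefer to create tokens from code instead of the command line, the same thing can be done with DRF’s Token model, for example from the python manage.py shell. A minimal sketch, reusing the user created above:

from django.contrib.auth.models import User
from rest_framework.authtoken.models import Token

user = User.objects.get(username='vitor')
token, created = Token.objects.get_or_create(user=user)  # reuses the token if one already exists
print(token.key)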

But now that we have the TokenAuthentication in place, let’s try to make another request to our /hello/ endpoint:

http http://127.0.0.1:8000/hello/

WWW-Authenticate Token

Notice how our API is now providing some extra information to the client on the required authentication method.

So finally, let’s use our token!

http http://127.0.0.1:8000/hello/ 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'

REST Token Authentication

And that’s pretty much it. From now on, every subsequent request should include the header Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf.

The formatting looks a bit odd and is usually a point of confusion. Exactly how you set this HTTP request header depends on the client you are using.

For example, if we were using cURL, the command would be something like this:

curl http://127.0.0.1:8000/hello/ -H 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'

Or if it was a Python requests call:

import requests

url = 'http://127.0.0.1:8000/hello/'
headers = {'Authorization': 'Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'}
r = requests.get(url, headers=headers)

Or, if we were using Angular, we could implement an HttpInterceptor and set the header:

import { Injectable } from '@angular/core';
import { HttpRequest, HttpHandler, HttpEvent, HttpInterceptor } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    const user = JSON.parse(localStorage.getItem('user'));
    if (user && user.token) {
      request = request.clone({
        setHeaders: {
          Authorization: `Token ${user.token}`
        }
      });
    }
    return next.handle(request);
  }
}

User Requesting a Token

DRF provides an endpoint for users to request an authentication token by posting their username and password.

Add the following route to the urls.py module:

myapi/urls.py

from django.urls import path
from rest_framework.authtoken.views import obtain_auth_token  # <-- Here
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
    path('api-token-auth/', obtain_auth_token, name='api_token_auth'),  # <-- And here
]

So now we have a brand new API endpoint, which is /api-token-auth/. Let’s first inspect it:

http http://127.0.0.1:8000/api-token-auth/

API Token Auth

It doesn’t handle GET requests. Basically it’s just a view to receive a POST request with username and password.

Let’s try again:

http post http://127.0.0.1:8000/api-token-auth/ username=vitor password=123

API Token Auth POST

The response body is the token associated with this particular user. From this point on you store this token and attach it to all future requests.

Then, again, the way you are going to make the POST request to the API depends on the language/framework you are using.

If this was an Angular client, you could store the token in localStorage; if it was a desktop CLI application, you could store it in a dot file in the user’s home directory.
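For example, from Python the whole flow could look roughly like this; a sketch using the requests library and the test credentials from this tutorial:

import requests

# Exchange username/password for a token
response = requests.post(
    'http://127.0.0.1:8000/api-token-auth/',
    data={'username': 'vitor', 'password': '123'},
)
token = response.json()['token']

# Attach the token to subsequent requests
r = requests.get(
    'http://127.0.0.1:8000/hello/',
    headers={'Authorization': 'Token {}'.format(token)},
)
print(r.json())  # {'message': 'Hello, World!'}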


Conclusions

Hopefully this tutorial provided some insight into how token authentication works. I will try to follow it up with concrete examples of Angular applications, command-line applications, and Web clients as well.

It is important to note that the default token implementation has some limitations, such as allowing only one token per user and providing no built-in way to set an expiry date on a token.
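If you do need expiring tokens, one common approach (not covered by this tutorial) is to subclass TokenAuthentication and reject tokens older than a given age. A rough sketch; the module name and the 24-hour lifetime are arbitrary examples:

myapi/core/authentication.py

from datetime import timedelta

from django.utils import timezone
from rest_framework.authentication import TokenAuthentication
from rest_framework.exceptions import AuthenticationFailed

class ExpiringTokenAuthentication(TokenAuthentication):
    TOKEN_LIFETIME = timedelta(hours=24)  # example value

    def authenticate_credentials(self, key):
        user, token = super().authenticate_credentials(key)
        if timezone.now() - token.created > self.TOKEN_LIFETIME:
            raise AuthenticationFailed('Token has expired.')
        return user, token

You would then reference this class in DEFAULT_AUTHENTICATION_CLASSES instead of the default TokenAuthentication.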

You can grab the code used in this tutorial at github.com/sibtc/drf-token-auth-example.

13-11-2020

17:24

30 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

29 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

28 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

27 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

26 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

25 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

24 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

23 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

22 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

21 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

20 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

19 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

18 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

17 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

16 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

15 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

14 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

13 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

12 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

11 April 2019 [GNOMON]

On 1 May 2019 my blog will have existed for 10 years and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with…

Python GUI application: consistent backups with fsarchiver [linux blogs franz ulenaers]

Python GUI application for making consistent backups with fsarchiver


A partition of type "Linux LVM" can be used for logical volumes, but also as a "snapshot"!
A snapshot can be an exact copy of a logical volume frozen at a given moment: this makes it possible to make consistent backups of logical volumes
while the logical volumes are in use!


My physical and logical volumes were created as follows:

    physical volume

      pvcreate /dev/sda1

    volume group

      vgcreate mydell /dev/sda1

    logical volumes

      lvcreate -L 1G -n boot mydell

      lvcreate -L 100G -n data mydell

      lvcreate -L 50G -n home mydell

      lvcreate -L 50G -n root mydell

      lvcreate -L 1G -n swap mydell


start screen
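As an illustration of the snapshot idea (not the GUI application itself), here is a minimal Python sketch that freezes the home volume in a snapshot, archives it with fsarchiver, and removes the snapshot again. The volume names follow the mydell example above; the archive path is an arbitrary example:

import subprocess

def run(cmd):
    # Run a command and stop at the first error
    print('+', ' '.join(cmd))
    subprocess.run(cmd, check=True)

# Freeze the 'home' logical volume in a copy-on-write snapshot
run(['lvcreate', '--snapshot', '--size', '1G', '--name', 'home_snap', '/dev/mydell/home'])
try:
    # Back up the frozen snapshot while /home stays in use
    run(['fsarchiver', 'savefs', '/backup/home.fsa', '/dev/mydell/home_snap'])
finally:
    # Always remove the snapshot, even if the backup failed
    run(['lvremove', '--force', '/dev/mydell/home_snap'])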

MyCloud procedures [linux blogs franz ulenaers]

MyCloud procedures

  • The procedure lftpUlefr01Cloudupload is used to upload files and folders to MyCloud

  • The procedure lftpUlefr01Cloudmirror is used to pull changes back down


Both procedures use the lftp program (the "sophisticated file transfer program") and are used to keep the laptop and the desktop in sync.


The procedures were adjusted so that hidden files and hidden folders are also processed.

For the mirror, certain files and folders that rarely change are filtered out (--exclude) so that they are not processed again;

on the Cloud they remain as a backup, but not on the various laptops (this was done for older mail from 2016, months 2016-11 and 2016-12,

and for all earlier months of 2017, up to and including September)!

  • see attachments
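The attachments themselves are not included here. Purely as a rough sketch of the idea, an upload driven from Python with lftp could look something like this; the host, credentials, paths, and the --exclude pattern are placeholders, not the author’s real procedure:

import subprocess

LFTP_COMMANDS = (
    'mirror --reverse --only-newer '
    '--exclude ^Mail/2016-1[12]/ '  # example filter for rarely changing folders
    '/home/user/Documents /Documents; '
    'quit'
)

subprocess.run(
    ['lftp', '-u', 'user,password', '-e', LFTP_COMMANDS, 'ftp://mycloud.example.com'],
    check=True,
)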


Python GUI application tune2fs [linux blogs franz ulenaers]

Python GUI application for the tune2fs command

Created Wednesday 18 October 2017

Written in the Python programming language, using Gtk+ 3.

Start it in a terminal with: sudo python mytune2fs.py

Or compile the Python source and start the compiled version.


See the attachments:
* pdf
* mytune2fs.py

Python GUI application myarchive.py [linux blogs franz ulenaers]

Python GUI application for making backups with fsarchiver

Created Friday 13 October 2017

GUI application for making backups, showing archive info, and restoring with fsarchiver.

See the included file: python_GUI_applicatie_backups_maken_met_fsarchiver.pdf


Start it in terminal mode with:

* sudo python myarchive.py

* sudo python myarchive2.py

or make a compiled version and start the generated objects.


python myfsck.py [linux blogs franz ulenaers]

Python GUI application for the fsck command

Created Friday 13 October 2017

See the included file myfsck.py.

This application can mount and unmount devices, but is mainly intended to run the fsck command.

Root rights are required!

Help?

* start it in terminal mode

* sudo python myfsck.py


Making a file that cannot be modified, renamed, or deleted in Linux! [linux blogs franz ulenaers]

Making a file that cannot be modified, renamed, or deleted in Linux!


The file .encfs6.xml


How: sudo chattr +i /data/Encrypt/.encfs6.xml

You cannot modify the file, you cannot rename the file, and you cannot delete the file, even if you are root.

  • set the attribute
  • view the status
    • lsattr .encfs6.xml
      • ----i--------e-- .encfs6.xml
        • the i means immutable
  • to remove the immutable attribute again
    • chattr -i .encfs6.xml



Backup laptop [linux blogs franz ulenaers]

The laptop has a multiboot setup: Windows 7 with encryption and Linux Mint.
For the backup of my laptop, see http://users.telenet.be/franz.ulenaers/laptopca-new.html

Links in Linux [linux blogs franz ulenaers]

On Linux you can give a file more than one name, so you can store a file in several places in the file tree without taking up (much) extra space on the hard disk.

There are two kinds of links:

  1. hard links

  2. symbolic links

A hard link reuses the same file number (inode).

A hard link does not work for a directory!

A hard link must be on the same file system, and the original file must exist!

With a symbolic link the file gets a new file number, and the file it points to does not have to exist.

A symbolic link also works for a directory.

bash shell, user ulefr01

pwd
/home/ulefr01/cgcles/linux
ls linuxcursus.odt -ila
293800 -rw-r--r-- 1 ulefr01 ulefr01 4251348 2005-12-17 21:11 linuxcursus.odt

The file linuxcursus is 4.2M in size, inode number 293800.

bash shell, user tom

pwd
/home/tom
ln /home/ulefr01/cgcles/linux/linuxcursus.odt cursuslinux.odt
tom@franz3:~ $ ls cursuslinux.odt -il
293800 -rw-r--r-- 2 ulefr01 ulefr01 4251348 2005-12-17 21:11 cursuslinux.odt
no extra 4.2M of space used, same inode number 293800!

bash shell, user root

pwd
/root
root@franz3:~ # ln /home/ulefr01/cgcles/linux/linuxcursus.odt linuxcursus.odt
root@franz3:~ # ls -il linux*
293800 -rw-rw-r-- 3 ulefr01 ulefr01 4251300 2005-12-17 21:31 linuxcursus.odt
no extra 4.2M of space used, same inode number 293800!

bash shell, user ulefr01, symbolic link

ln -s cgcles/linux/linuxcursus.odt linuxcursus.odt
ulefr01@franz3:~ $ ls -il linuxcursus.odt
1191741 lrwxrwxrwx 1 ulefr01 ulefr01 28 2005-12-17 21:42 linuxcursus.odt -> cgcles/linux/linuxcursus.odt
only 28 bytes

ln -s linuxcursus.odt test.odt
1191898 lrwxrwxrwx 1 ulefr01 ulefr01 15 2005-12-17 22:00 test.odt -> linuxcursus.odt
only 15 bytes

rm linuxcursus.odt
ulefr01@franz3:~ $ ls *.odt -il
1193723 -rw-r--r-- 1 ulefr01 ulefr01 27521 2005-11-23 20:11 Backup&restore.odt
1193942 -rw-r--r-- 1 ulefr01 ulefr01 13535 2005-11-26 16:11 doc.odt
1191933 -rw------- 1 ulefr01 ulefr01 6135 2005-12-06 12:00 fru.odt
1193753 -rw-r--r-- 1 ulefr01 ulefr01 19865 2005-11-23 22:44 harddiskdata.odt
1193576 -rw-r--r-- 1 ulefr01 ulefr01 7198 2005-11-26 21:46 ooo-1.odt
1191749 -rw------- 1 ulefr01 ulefr01 22542 2005-12-06 16:16 Regen.odt
1191898 lrwxrwxrwx 1 ulefr01 ulefr01 15 2005-12-17 22:00 test.odt -> linuxcursus.odt
test.odt points to a file that does not exist!
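The same distinction can be demonstrated from Python; a small sketch with example file names (not the files above):

import os

with open('original.txt', 'w') as f:
    f.write('hello\n')

os.link('original.txt', 'hardlink.txt')    # hard link: same inode, same data blocks
os.symlink('original.txt', 'symlink.txt')  # symbolic link: new inode, stores only a path

print(os.stat('original.txt').st_ino == os.stat('hardlink.txt').st_ino)  # True: same inode
os.remove('original.txt')
print(open('hardlink.txt').read())          # still readable through the hard link
print(os.path.exists('symlink.txt'))        # False: the symbolic link now points to nothing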

18-02-2020

21:55

Samsung Galaxy Z Flip, S20(+) and S20 Ultra hands-on [Laatste Artikelen - Webwereld]

Samsung invited us to take a close look at its three newest smartphones. We gladly accepted, and we share our findings with you.

02-02-2020

21:29

Hands-on: Synology Virtual Machine Manager [Laatste Artikelen - Webwereld]

By now it is well known that your NAS can be used for far more than just storing files, but did you know that you can also manage virtual machines with it? We explain how.

23-01-2020

16:42

What you need to know about FIDO keys [Laatste Artikelen - Webwereld]

Thanks to the FIDO2 standard it is possible to log in securely to various online services without a password. Microsoft and Google, among others, already offer options for this. More organisations that support it will probably follow this year.

How to use your iPhone without an Apple ID [Laatste Artikelen - Webwereld]

Nowadays you have to create an account for just about everything you want to do online, even if you do not plan to work online or simply do not feel like sharing your data with the manufacturer. Today we show you how to manage that with your iPhone or iPad.

A serious hole in Internet Explorer is already being exploited in the wild [Laatste Artikelen - Webwereld]

A new zero-day vulnerability has been discovered in Microsoft Internet Explorer. The hole is already being exploited, and a security update is not yet available.

How to install Chrome extensions in the new Edge [Laatste Artikelen - Webwereld]

The new version of Edge is built on code from the Chromium project, but in the default configuration extensions can only be installed from the Microsoft Store. Fortunately, that is fairly easy to change.

19-01-2020

12:59

Windows 10 upgrade still free [Laatste Artikelen - Webwereld]

A few years ago Microsoft gave users the option of upgrading from Windows 7 to Windows 10 free of charge. At times this went so far that even users who did not want the upgrade got one. The offer has long since ended, but upgrading for free is still possible, and it is now easier than ever. We tell you how.

Chrome, Edge, Firefox: which browser is the fastest? [Laatste Artikelen - Webwereld]

A lot has changed in the market for PC browsers. About five years ago there was still more competition and more fully independent development; now only two engines remain: the one behind Chrome and the one behind Firefox. With this month's release of Microsoft's Blink-based Edge, we look at benchmarks and real-world tests.

Cooler Master redesigns thermal paste tubes because of drug suspicions [Laatste Artikelen - Webwereld]

Cooler Master has changed the look of its thermal paste syringes because, by its own account, the company is tired of having to explain to parents that the contents are not drugs but thermal paste.

06-03-2018

19-09-2017

10:33

Embedded Linux Engineer [Job Openings]

You're eager to work with Linux in an exciting environment. You have a lot of PC equipment experience. Prior experience with embedded Linux or small-footprint distributions is considered a plus. Region: East/West Flanders.

Linux Teacher [Job Openings]

We're looking for someone capable of teaching Linux and/or Solaris professionally. Ideally, the candidate has experience teaching Linux, and possibly other non-Windows OSes as well.

Kernel Developer [Job Openings]

We're looking for someone with kernel device driver development experience. Preferably, but not necessarily, with knowledge of AV or TV devices.

C/C++ Developers [Job Openings]

We're searching for Linux C/C++ developers. Region: Leuven.

Feeds

Feed | RSS | Last fetched | Next fetched after
Computable | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
GNOMON | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
http://www.h-online.com/news/atom.xml | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
https://www.heise.de/en | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
Job Openings | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
Laatste Artikelen - Webwereld | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
linux blogs franz ulenaers | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
Linux Journal - The Original Magazine of the Linux Community | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
Linux Today | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
OMG! Ubuntu! | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
Planet Python | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
Press Releases Archives - The Document Foundation Blog | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
Simple is Better Than Complex | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
Slashdot: Linux | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
Tech Drive-in | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34
ulefr01 - blog franz ulenaers | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34

Last modified: Saturday 15 February 2025 21:35
Copyright © 2024 - Franz Ulenaers (email: franz.ulenaers@telenet.be)