Real Python: Python Keywords: An Introduction [Planet Python]
Python keywords are reserved words with specific functions and restrictions in the language. Currently, Python has thirty-five keywords and four soft keywords. These keywords are always available in Python, which means you don’t need to import them. Understanding how to use them correctly is fundamental for building Python programs.
By the end of this tutorial, you’ll understand that:
- You can get a list of all keywords using keyword.kwlist from the keyword module
- print and exec are keywords that have been deprecated and turned into functions in Python 3

In this article, you’ll find a basic introduction to all Python keywords and soft keywords along with other resources that will be helpful for learning more about each keyword.
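If you want to check these lists yourself, the keyword module exposes them directly (softkwlist and issoftkeyword need Python 3.9 or later):

import keyword

print(len(keyword.kwlist))             # 35 regular keywords
print(keyword.iskeyword("for"))        # True
print(keyword.issoftkeyword("match"))  # True
print(keyword.softkwlist)              # e.g. ['_', 'case', 'match', 'type'] on Python 3.12+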
Python keywords are special reserved words that have specific meanings and purposes and can’t be used for anything but those specific purposes. These keywords are always available—you’ll never have to import them into your code.
Python keywords are different from Python’s built-in functions and types. The built-in functions and types are also always available, but they aren’t as restrictive as the keywords in their usage.
An example of something you can’t do with Python keywords is assign something to them. If you try, then you’ll get a SyntaxError. You won’t get a SyntaxError if you try to assign something to a built-in function or type, but it still isn’t a good idea. For a more in-depth explanation of ways keywords can be misused, check out Invalid Syntax in Python: Common Reasons for SyntaxError.
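For example, here’s roughly what happens in a REPL if you try each (the exact error formatting varies by Python version):

>>> for = "loop"        # keywords can't be assignment targets
  File "<stdin>", line 1
    for = "loop"
        ^
SyntaxError: invalid syntax

>>> list = [1, 2, 3]    # no error, but the built-in list() is now shadowed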
There are thirty-five keywords in Python. Here’s a list of them, each linked to its relevant section in this tutorial:
False | await | else | import | pass
None | break | except | in | raise
True | class | finally | is | return
and | continue | for | lambda | try
as | def | from | nonlocal | while
assert | del | global | not | with
async | elif | if | or | yield
Two keywords have additional uses beyond their initial use cases. The else keyword is also used with loops and with try and except, in addition to its use in conditional statements. The as keyword is most commonly used in import statements, but it’s also used with the with keyword.
The list of Python keywords and soft keywords has changed over time. For example, the await and async keywords weren’t added until Python 3.7. Also, both print and exec were keywords in Python 2.7 but were turned into built-in functions in Python 3 and no longer appear in the keywords list.
As mentioned above, you’ll get an error if you try to assign something to a Python keyword. Soft keywords, on the other hand, aren’t that strict. They syntactically act as keywords only in certain conditions.
This new capability was made possible thanks to the introduction of the PEG parser in Python 3.9, which changed how the interpreter reads the source code.
Leveraging the PEG parser allowed for the introduction of structural pattern matching in Python. In order to use intuitive syntax, the authors picked match, case, and _ for the pattern matching statements. Notably, match and case are widely used for this purpose in many other programming languages.

To prevent conflicts with existing Python code that already used match, case, and _ as variable or function names, Python developers decided to introduce the concept of soft keywords.
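A quick illustration of how a soft keyword acts as a keyword only in certain conditions: match is a keyword at the start of a match statement, but elsewhere it’s a perfectly legal name (Python 3.10+):

match = ["a", "list"]        # fine: match acts as a normal variable name here

def describe(status):
    match status:            # here match introduces a match statement
        case 200:
            return "OK"
        case 404:
            return "Not Found"
        case _:
            return "Something else"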
Currently, there are four soft keywords in Python: match, case, type, and _.
You can use the links above to jump to the soft keywords you’d like to read about, or you can continue reading for a guided tour.
True, False, None

There are three Python keywords that are used as values. These values are singleton values that can be used over and over again and always reference the exact same object. You’ll most likely see and use these values a lot.
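Because they’re singletons, identity checks with is always succeed when comparing two references to the same value:

>>> a = None
>>> b = None
>>> a is b      # every None in a program is the exact same object
True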
There are a few terms used in the sections below that may be new to you. They’re defined here, and you should be aware of their meaning before proceeding:
Lead Asahi Linux Developer Quits Days After Leaving Kernel Maintainer Role [Slashdot: Linux]
Hector Martin has resigned as the project lead of Asahi Linux, weeks after stepping down from his role as a Linux kernel maintainer for Apple ARM support. His departure from Asahi follows a contentious exchange with Linus Torvalds over development processes and social media advocacy. After quitting kernel maintenance earlier this month, the conflict escalated when Martin suggested that "shaming on social media" might be necessary to effect change. Torvalds sharply rejected this approach, stating that "social media brigading just makes me not want to have anything at all to do with your approach" and suggested that Martin himself might be the problem. In his final resignation announcement from Asahi, Martin wrote: "I no longer have any faith left in the kernel development process or community management approach." The dispute reflects deeper tensions in the Linux kernel community, particularly around the integration of Rust code. It follows the August departure of another key Rust for Linux maintainer, Wedson Almeida Filho from Microsoft. According to Sonatype's research, more than 300,000 open source projects have slowed or halted updates since 2020.
Is It Time For a Change In GNOME Leadership? [Slashdot: Linux]
Longtime Slashdot reader BrendaEM writes: Command-line aside, Cinnamon is the most effective keeper of the Linux desktop flame -- by not abandoning desktop and laptop computers. Yes, there are other desktop GUIs, such as MATE, and the lightweight Xfce, which are valuable options when low overhead is important, such as in LinuxCNC. However, among the general public lies a great expanse of office workers who need a full-featured Linux desktop. The programmers who work on GNOME and its family of supporting applications enrich many other desktops and do more than their share. These faithful developers deserve better user-interface leadership. GNOME has tried to steer itself into tablet waters, which is admirable, but GNOME 3.x diminished the desktop experience for both laptop and desktop users. For instance, the moment you design what should be a graphical user interface with words such as "Activities," you ask people to change horses midstream. That is not to say that the command line and GUI cannot coexist -- because they can, as they do in many CAD programs. I remember a time when GNOME ruled the Linux desktop -- and I can remember when GNOME left those users behind. Perhaps in the future, GNOME could return to the Linux desktop and join forces with Cinnamon -- so that we may once again have the year of the Linux desktop.
LibreOffice 24.8.1, the first minor release of the recently announced LibreOffice 24.8 family, is available for download [Press Releases Archives - The Document Foundation Blog]
The LibreOffice 24.8 family is optimised for the privacy-conscious office suite user who wants full control over the information they share
Berlin, 12 September 2024 – LibreOffice 24.8.1, the first minor release of the LibreOffice 24.8 family of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple and Intel) and Linux, is available at www.libreoffice.org/download. For users who don’t need the latest features and prefer a more tested version, TDF maintains the previous LibreOffice 24.2 family, with several months of back-ported fixes. The current version is LibreOffice 24.2.6.
LibreOffice is the only software for creating documents that contain personal or confidential information that respects the privacy of the user – ensuring that the user is able to decide if and with whom to share the content they create. As such, LibreOffice is the best option for the privacy-conscious office suite user, and offers a feature set comparable to the leading product on the market.
In addition, LibreOffice offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes by optimising the space available on the desktop to put the maximum number of features just a click or two away.
The biggest advantage over competing products is the LibreOffice Technology Engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those from ecosystem companies – are based. This allows LibreOffice to provide a better user experience and to produce identical and fully interoperable documents based on the two available ISO standards: the Open Document Format (ODT, ODS and ODP) and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a great deal of artificial complexity, which can cause problems for users who are confident that they are using a true open standard.
End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which can be downloaded from the following link: books.libreoffice.org. In addition, they will be able to get first-level technical support from volunteers on the user mailing lists and the Ask LibreOffice website: ask.libreoffice.org.
A short video highlighting the main new features is available on YouTube and PeerTube peertube.opencloud.lu/w/ibmZUeRgnx9bPXQeYUyXTV.
LibreOffice for Enterprise
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.
The Document Foundation has developed a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the provision of an LTS (long-term support) enterprise-optimised version of LibreOffice, plus migration consulting and training provided by certified professionals who offer value-added solutions that are consistent with proprietary offerings. Reference: www.libreoffice.org/get-help/professional-support/.
In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.
LibreOffice 24.8.1 availability
LibreOffice 24.8.1 is available from www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 (no longer supported by Microsoft) and Apple macOS 10.15. Products based on LibreOffice technology for Android and iOS are listed at www.libreoffice.org/download/android-and-ios/.
LibreOffice users, free software advocates and community members can support The Document Foundation by making a donation at www.libreoffice.org/donate.
Ubuntu’s Icon Theme Fixing Its Not-So-Obvious ‘Bug’ [OMG! Ubuntu!]
Ever looked at Ubuntu’s default icon theme Yaru and found yourself thinking: “Eh, some of those icons look too big”? —No, can’t say I had either! But it turns out some of the icons are indeed oversized. The Yaru icon theme in Ubuntu uses 4 different shapes for its app, folder and mimetype (file) icons, with a shape picked based on what works best for the design motif being used. Those shapes are: Of those, the most common icon shape used in Yaru is ‘square’ (with rounded corners, but don’t call it a squircle cos that’s so 2014, y’all). It’s […]
Ubuntu 24.04.2 Delayed, Won’t Be Released This Week [OMG! Ubuntu!]
If you were expecting Ubuntu 24.04.2 LTS to drop tomorrow, I come bearing some bad news: the release has been delayed by a week. Canonical’s Utkarsh Gupta reports that an ‘unfortunate incident’ resulted in some of the newly spun Ubuntu 24.04.2 images (for flavours) being built without the new HWE kernel on board (which is Linux 6.11, for those unaware). Now, including a new kernel version on the ISO is kind of the whole point of the second Ubuntu point release. It has to be there so that the latest long-term support release can boot on and support the latest […]
GNOME’s Website Just Got a Major Redesign [OMG! Ubuntu!]
GNOME rolled out a huge revamp to its official website today, and I have to say: it’s a solid improvement over the old one. The official GNOME website has an important role, serving as both showcase and springboard for those looking to learn more about the desktop environment, the app ecosystem, developer documentation, or how to get involved and support the project. Arranging, presenting, and meeting all of those needs on a single landing page—and doing it in an engaging, encouraging way? Difficult to pull off—but GNOME has. The new design looks flashy and modern. It’s more spacious and vibrant, […]
Clapper Media Player Adds New Features, Official Windows Build [OMG! Ubuntu!]
A new version of the slick Clapper media player is out with several neat improvements. Not newly new, I should say. I hadn’t run a flatpak update in Ubuntu in an age so I only just noticed an update pending for this nifty little media player. But I figured I’d write about it since it’s been around 10 months since its last major release (save a bug fix release last summer). So what’s new? Well, Clapper 0.8.0 intros a new libpeas-based plugin system in its underlying Clapper library (which other apps can make use of to playback media, as Mastodon client […]
KDE Plasma 6.3 Released, This is What’s New [OMG! Ubuntu!]
A new version of the KDE Plasma desktop environment is out and, as you’d expect, the update is packed with new features, UI tweaks, and performance boosts. KDE Plasma 6.3 is the fourth major update in the KDE Plasma 6.x series and it also marks the one-year anniversary of the KDE Plasma 6.0 debut – something KDE notes in its announcement: One year on, with the teething problems a major new release inevitably brings firmly behind us, Plasma’s developers have worked on fine-tuning, squashing bugs and adding features to Plasma 6 — turning it into the best desktop environment for […]
Ghostty Terminal Now Supports Server-Side Decorations on Linux [OMG! Ubuntu!]
A new version of Ghostty emerged this week and in this post I run-through the key changes. For those unfamiliar with it, Ghostty is an open-source terminal emulator written in Zig. It offers a “fast, feature-rich, and native” experience — doesn’t claim to be faster, more featured, or go deeper than other native terminals, just offer a competitive combo of the three. Given it does pretty much everything other terminal emulators do, fans faithful to more established terminal emulators won’t find Ghostty‘s presence spooks ’em into switching. It’s a passion project there to be used (or not) depending on need, taste, […]
Best Free and Open Source Alternatives to Apple AirDrop [Linux Today]
AirDrop is a proprietary wireless ad hoc service. The service transfers files among supported Macintosh computers and iOS devices by means of close-range wireless communication. AirDrop is not available for Linux. We recommend the best free and open source alternatives.
Beelzebub: Open-source honeypot framework [Linux Today]
Beelzebub is an open-source honeypot framework engineered to create a secure environment for detecting and analyzing cyber threats. It features a low-code design for seamless deployment and leverages AI to emulate the behavior of a high-interaction honeypot.
How to Install Tiny Tiny RSS Using Docker on PC (Ultimate Guide) [Linux Today]
This article will show you how to install Tiny Tiny RSS on Linux using Docker and then how to add a new RSS feed, add plugins, themes, and more.
How to Install Speedtest Tracker to Monitor Your Internet Speed [Linux Today]
Learn how to install Speedtest Tracker with Docker and monitor your internet speed with real-time results.
Zellij: A Modern Terminal Multiplexer for Linux [Linux Today]
In the world of Linux, terminal multiplexers are essential tools for developers, system administrators, and power users, as they allow you to manage multiple terminal sessions within a single window, making your workflow more efficient and organized.
One of the newest and most exciting terminal multiplexers available today is Zellij, which is an open-source terminal multiplexer designed to simplify and enhance the way you work in the command line.
Unlike traditional multiplexers like tmux or screen, Zellij offers a unique layout system, keybindings that are easy to learn, and a plugin system that allows for customization.
You can find the official repository for Zellij on GitHub, which is actively maintained by a community of developers who are passionate about improving the terminal experience.
Chezmoi: Manage Your Dotfiles Across Multiple Linux Systems [Linux Today]
Chezmoi is an incredible CLI tool that makes it easier to manage your system and software configuration dotfiles across multiple systems.
How to Change Java Version on Ubuntu (CLI and GUI) [Linux Today]
Discover a step-by-step guide to change the default version of Java using the CLI and GUI methods on the Ubuntu system.
Microsoft’s WSL May Soon Embrace Arch Linux [Linux Today]
Arch may soon become an officially offered distro on Microsoft’s Windows Subsystem for Linux, expanding its reach to Windows users.
15 Best Free and Open Source Console Email Clients [Linux Today]
To provide an insight into the quality of software that is available, we have compiled a list of 15 console email clients. Hopefully, there will be something of interest for anyone who wants to efficiently manage their mailbox from the terminal.
You Can Now Install Ubuntu on WSL Using the New Tar-Based Format [Linux Today]
Starting from WSL version 2.4.8, we can install Ubuntu on WSL from a tar file, without using the Microsoft Store on Windows.
Django Weblog: DjangoCongress JP 2025 Announcement and Live Streaming! [Planet Python]
DjangoCongress JP 2025, to be held on Saturday, February 22, 2025 at 10 am (Japan Standard Time), will be broadcast live!
It will be streamed on the following YouTube Live channels:
This year there will be talks not only about Django, but also about FastAPI and other asynchronous web topics. There will also be talks on Django core development, Django Software Foundation (DSF) governance, and other topics from around the world. Simultaneous translation will be provided in both English and Japanese.
A public viewing of the event will also be held in Tokyo. A reception will also be held, so please check the following connpass page if you plan to attend.
Registration (connpass page): DjangoCongress JP 2025パブリックビューイング
Eli Bendersky: Decorator JITs - Python as a DSL [Planet Python]
Spend enough time looking at Python programs and packages for machine learning, and you'll notice that the "JIT decorator" pattern is pretty popular. For example, this JAX snippet:
import jax.numpy as jnp
import jax
@jax.jit
def add(a, b):
return jnp.add(a, b)
# Use "add" as a regular Python function
... = add(...)
Or the Triton language for writing GPU kernels directly in Python:
import triton
import triton.language as tl
@triton.jit
def add_kernel(x_ptr,
y_ptr,
output_ptr,
n_elements,
BLOCK_SIZE: tl.constexpr):
pid = tl.program_id(axis=0)
block_start = pid * BLOCK_SIZE
offsets = block_start + tl.arange(0, BLOCK_SIZE)
mask = offsets < n_elements
x = tl.load(x_ptr + offsets, mask=mask)
y = tl.load(y_ptr + offsets, mask=mask)
output = x + y
tl.store(output_ptr + offsets, output, mask=mask)
In both cases, the function decorated with jit doesn't get executed by the Python interpreter in the normal sense. Instead, the code inside is more like a DSL (Domain Specific Language) processed by a special purpose compiler built into the library (JAX or Triton). Another way to think about it is that Python is used as a meta language to describe computations.
In this post I will describe some implementation strategies used by libraries to make this possible.
The goal is to explain how different kinds of jit decorators work by using a simplified, educational example that implements several approaches from scratch. All the approaches featured in this post will be using this flow: a Python function is translated into a small Expr IR, the Expr IR is lowered to LLVM IR, and the LLVM IR is JIT-compiled and executed with the function's runtime arguments.
These are the steps that happen when a Python function wrapped with our educational jit decorator is called:
1. The Python function is converted into the Expr IR (this is where the different approaches diverge).
2. The Expr IR is lowered to LLVM IR.
3. The LLVM IR is JIT-compiled and invoked with the runtime arguments, and the result is returned.
Steps (2) and (3) use llvmlite; I've written about llvmlite before, see this post and also the pykaleidoscope project. For an introduction to JIT compilation, be sure to read this and maybe also the series of posts starting here.
First, let's look at the Expr IR. Here we'll make a big simplification - only supporting functions that define a single expression, e.g.:
def expr2(a, b, c, d):
return (a + d) * (10 - c) + b + d / c
Naturally, this can be easily generalized - after all, LLVM IR can be used to express fully general computations.
Here are the Expr data structures:
class Expr:
pass
@dataclass
class ConstantExpr(Expr):
value: float
@dataclass
class VarExpr(Expr):
name: str
arg_idx: int
class Op(Enum):
ADD = "+"
SUB = "-"
MUL = "*"
DIV = "/"
@dataclass
class BinOpExpr(Expr):
left: Expr
right: Expr
op: Op
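To make the IR concrete, here's the body of expr2 written out as a hand-built Expr tree (my own illustration using the classes above; the decorators shown later construct this kind of tree automatically):

# (a + d) * (10 - c) + b + d / c, built manually:
a, b = VarExpr("a", 0), VarExpr("b", 1)
c, d = VarExpr("c", 2), VarExpr("d", 3)
expr2_ir = BinOpExpr(
    BinOpExpr(
        BinOpExpr(
            BinOpExpr(a, d, Op.ADD),                 # a + d
            BinOpExpr(ConstantExpr(10), c, Op.SUB),  # 10 - c
            Op.MUL),
        b, Op.ADD),
    BinOpExpr(d, c, Op.DIV),                         # d / c
    Op.ADD)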
To convert an Expr into LLVM IR and JIT-execute it, we'll use this function:
def llvm_jit_evaluate(expr: Expr, *args: float) -> float:
"""Use LLVM JIT to evaluate the given expression with *args.
expr is an instance of Expr. *args are the arguments to the expression, each
a float. The arguments must match the arguments the expression expects.
Returns the result of evaluating the expression.
"""
llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()
llvm.initialize_native_asmparser()
cg = _LLVMCodeGenerator()
modref = llvm.parse_assembly(str(cg.codegen(expr, len(args))))
target = llvm.Target.from_default_triple()
target_machine = target.create_target_machine()
with llvm.create_mcjit_compiler(modref, target_machine) as ee:
ee.finalize_object()
cfptr = ee.get_function_address("func")
cfunc = CFUNCTYPE(c_double, *([c_double] * len(args)))(cfptr)
return cfunc(*args)
It uses the _LLVMCodeGenerator class to actually generate LLVM IR from Expr. This process is straightforward and covered extensively in the resources I linked to earlier; take a look at the full code here.
My goal with this architecture is to make things simple, but not too simple. On one hand - there are several simplifications: only single expressions are supported, very limited set of operators, etc. It's very easy to extend this! On the other hand, we could have just trivially evaluated the Expr without resorting to LLVM IR; I do want to show a more complete compilation pipeline, though, to demonstrate that an arbitrary amount of complexity can be hidden behind these simple interfaces.
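For contrast, the "trivially evaluate the Expr" alternative mentioned above could look roughly like this tree-walking interpreter (a sketch reusing the Expr classes from above; it is not part of the post's actual pipeline):

def interp_evaluate(expr: Expr, *args: float) -> float:
    # Walk the Expr tree directly and compute its value, no LLVM involved.
    match expr:
        case ConstantExpr(value):
            return value
        case VarExpr(_, arg_idx):
            return args[arg_idx]
        case BinOpExpr(left, right, op):
            lhs = interp_evaluate(left, *args)
            rhs = interp_evaluate(right, *args)
            match op:
                case Op.ADD:
                    return lhs + rhs
                case Op.SUB:
                    return lhs - rhs
                case Op.MUL:
                    return lhs * rhs
                case Op.DIV:
                    return lhs / rhs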
With these building blocks in hand, we can review the strategies used by jit decorators to convert Python functions into Exprs.
Python comes with powerful code reflection and introspection capabilities out of the box. Here's the astjit decorator:
def astjit(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
if kwargs:
raise ASTJITError("Keyword arguments are not supported")
source = inspect.getsource(func)
tree = ast.parse(source)
emitter = _ExprCodeEmitter()
emitter.visit(tree)
return llvm_jit_evaluate(emitter.return_expr, *args)
return wrapper
This is a standard Python decorator. It takes a function and returns another function that will be used in its place (functools.wraps ensures that function attributes like the name and docstring of the wrapper match the wrapped function).
Here's how it's used:
from astjit import astjit
@astjit
def some_expr(a, b, c):
return b / (a + 2) - c * (b - a)
print(some_expr(2, 16, 3))
After astjit is applied to some_expr, what some_expr holds is the wrapper. When some_expr(2, 16, 3) is called, the wrapper is invoked with *args = [2, 16, 3].
The wrapper obtains the AST of the wrapped function, and then uses _ExprCodeEmitter to convert this AST into an Expr:
class _ExprCodeEmitter(ast.NodeVisitor):
def __init__(self):
self.args = []
self.return_expr = None
self.op_map = {
ast.Add: Op.ADD,
ast.Sub: Op.SUB,
ast.Mult: Op.MUL,
ast.Div: Op.DIV,
}
def visit_FunctionDef(self, node):
self.args = [arg.arg for arg in node.args.args]
if len(node.body) != 1 or not isinstance(node.body[0], ast.Return):
raise ASTJITError("Function must consist of a single return statement")
self.visit(node.body[0])
def visit_Return(self, node):
self.return_expr = self.visit(node.value)
def visit_Name(self, node):
try:
idx = self.args.index(node.id)
except ValueError:
raise ASTJITError(f"Unknown variable {node.id}")
return VarExpr(node.id, idx)
def visit_Constant(self, node):
return ConstantExpr(node.value)
def visit_BinOp(self, node):
left = self.visit(node.left)
right = self.visit(node.right)
try:
op = self.op_map[type(node.op)]
return BinOpExpr(left, right, op)
except KeyError:
raise ASTJITError(f"Unsupported operator {node.op}")
When _ExprCodeEmitter finishes visiting the AST it's given, its return_expr field will contain the Expr representing the function's return value. The wrapper then invokes llvm_jit_evaluate with this Expr.
Note how our decorator interjects into the regular Python execution process. When some_expr is called, instead of the standard Python compilation and execution process (code is compiled into bytecode, which is then executed by the VM), we translate its code to our own representation and emit LLVM from it, and then JIT execute the LLVM IR. While it seems kinda pointless in this artificial example, in reality this means we can execute the function's code in any way we like.
This approach is almost exactly how the Triton language works. The body of a function decorated with @triton.jit gets parsed to a Python AST, which then - through a series of internal IRs - ends up in LLVM IR; this in turn is lowered to PTX by the NVPTX LLVM backend. Then, the code runs on a GPU using a standard CUDA pipeline.
Naturally, the subset of Python that can be compiled down to a GPU is limited; but it's sufficient to run performant kernels, in a language that's much friendlier than CUDA and - more importantly - lives in the same file with the "host" part written in regular Python. For example, if you want testing and debugging, you can run Triton in "interpreter mode" which will just run the same kernels locally on a CPU.
Note that Triton lets us import names from the triton.language package and use them inside kernels; these serve as the intrinsics for the language - special calls the compiler handles directly.
Python is a fairly complicated language with a lot of features. Therefore, if our JIT has to support some large portion of Python semantics, it may make sense to leverage more of Python's own compiler. Concretely, we can have it compile the wrapped function all the way to bytecode, and start our translation from there.
Here's the bytecodejit decorator that does just this [1]:
def bytecodejit(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
if kwargs:
raise BytecodeJITError("Keyword arguments are not supported")
expr = _emit_exprcode(func)
return llvm_jit_evaluate(expr, *args)
return wrapper
def _emit_exprcode(func):
bc = func.__code__
stack = []
for inst in dis.get_instructions(func):
match inst.opname:
case "LOAD_FAST":
idx = inst.arg
stack.append(VarExpr(bc.co_varnames[idx], idx))
case "LOAD_CONST":
stack.append(ConstantExpr(inst.argval))
case "BINARY_OP":
right = stack.pop()
left = stack.pop()
match inst.argrepr:
case "+":
stack.append(BinOpExpr(left, right, Op.ADD))
case "-":
stack.append(BinOpExpr(left, right, Op.SUB))
case "*":
stack.append(BinOpExpr(left, right, Op.MUL))
case "/":
stack.append(BinOpExpr(left, right, Op.DIV))
case _:
raise BytecodeJITError(f"Unsupported operator {inst.argval}")
case "RETURN_VALUE":
if len(stack) != 1:
raise BytecodeJITError("Invalid stack state")
return stack.pop()
case "RESUME" | "CACHE":
# Skip nops
pass
case _:
raise BytecodeJITError(f"Unsupported opcode {inst.opname}")
The Python VM is a stack machine; so we emulate a stack to convert the function's bytecode to Expr IR (a bit like an RPN evaluator). As before, we then use our llvm_jit_evaluate utility function to lower Expr to LLVM IR and JIT execute it.
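If you want to see what the decorator is actually walking over, the dis module can print the instruction stream (the exact opcodes depend on the CPython version; the comment shows roughly what 3.12 emits):

import dis

def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

for inst in dis.get_instructions(some_expr):
    print(inst.opname, inst.argrepr)
# Typical output is a series of LOAD_FAST/LOAD_CONST pushes, BINARY_OP
# instructions with argrepr "+", "-", "*", "/", and a final RETURN_VALUE.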
Using this JIT is as simple as the previous one - just swap astjit for bytecodejit:
from bytecodejit import bytecodejit
@bytecodejit
def some_expr(a, b, c):
return b / (a + 2) - c * (b - a)
print(some_expr(2, 16, 3))
Numba is a compiler for Python itself. The idea is that you can speed up specific functions in your code by slapping a numba.njit decorator on them. What happens next is similar in spirit to our simple bytecodejit, but of course much more complicated because it supports a very large portion of Python semantics.
Numba uses the Python compiler to emit bytecode, just as we did; it then converts it into its own IR, and then to LLVM using llvmlite [2].
By starting with the bytecode, Numba makes its life easier (no need to rewrite the entire Python compiler). On the other hand, it also makes some analyses harder, because by the time we're in bytecode, a lot of semantic information existing in higher-level representations is lost. For example, Numba has to sweat a bit to recover control flow information from the bytecode (by running it through a special interpreter first).
The two approaches we've seen so far are similar in many ways - both rely on Python's introspection capabilities to compile the source code of the JIT-ed function to some extent (one to AST, the other all the way to bytecode), and then work on this lowered representation.
The tracing strategy is very different. It doesn't analyze the source code of the wrapped function at all - instead, it traces its execution by means of specially-boxed arguments, leveraging overloaded operators and functions, and then works on the generated trace.
The code implementing this for our simple demo is surprisingly compact:
def tracejit(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
if kwargs:
raise TraceJITError("Keyword arguments are not supported")
argspec = inspect.getfullargspec(func)
argboxes = []
for i, arg in enumerate(args):
if i >= len(argspec.args):
raise TraceJITError("Too many arguments")
argboxes.append(_Box(VarExpr(argspec.args[i], i)))
out_box = func(*argboxes)
return llvm_jit_evaluate(out_box.expr, *args)
return wrapper
Each runtime argument of the wrapped function is assigned a VarExpr, and that is placed in a _Box, a placeholder class which lets us do operator overloading:
@dataclass
class _Box:
expr: Expr
_Box.__add__ = _Box.__radd__ = _register_binary_op(Op.ADD)
_Box.__sub__ = _register_binary_op(Op.SUB)
_Box.__rsub__ = _register_binary_op(Op.SUB, reverse=True)
_Box.__mul__ = _Box.__rmul__ = _register_binary_op(Op.MUL)
_Box.__truediv__ = _register_binary_op(Op.DIV)
_Box.__rtruediv__ = _register_binary_op(Op.DIV, reverse=True)
The remaining key function is _register_binary_op:
def _register_binary_op(opcode, reverse=False):
"""Registers a binary opcode for Boxes.
If reverse is True, the operation is registered as arg2 <op> arg1,
instead of arg1 <op> arg2.
"""
def _op(arg1, arg2):
if reverse:
arg1, arg2 = arg2, arg1
box1 = arg1 if isinstance(arg1, _Box) else _Box(ConstantExpr(arg1))
box2 = arg2 if isinstance(arg2, _Box) else _Box(ConstantExpr(arg2))
return _Box(BinOpExpr(box1.expr, box2.expr, opcode))
return _op
To understand how this works, consider this trivial example:
@tracejit
def add(a, b):
return a + b
print(add(1, 2))
After the decorated function is defined, add holds the wrapper function defined inside tracejit. When add(1, 2) is called, the wrapper runs: it wraps each argument in a _Box holding a VarExpr, calls the original add with these boxes, and the overloaded + builds a BinOpExpr from the two VarExprs. The wrapper then passes the resulting Expr to llvm_jit_evaluate along with the actual runtime arguments 1 and 2.
This might be a little mind-bending at first, because there are two different executions that happen: the tracing execution, in which the Python interpreter runs the body of add with _Box arguments to build up an Expr, and the real execution, in which the JIT-compiled code generated from that Expr runs with the actual numeric arguments.
This tracing approach has some interesting characteristics. Since we don't have to analyze the source of the wrapped functions but only trace through the execution, we can "magically" support a much richer set of programs, e.g.:
@tracejit
def use_locals(a, b, c):
x = a + 2
y = b - a
z = c * x
return y / x - z
print(use_locals(2, 8, 11))
This just works with our basic tracejit. Since Python variables are placeholders (references) for values, our tracing step is oblivious to them - it follows the flow of values. Another example:
@tracejit
def use_loop(a, b, c):
result = 0
for i in range(1, 11):
result += i
return result + b * c
print(use_loop(10, 2, 3))
This also just works! The created Expr will be a long chain of BinOpExpr additions of the values i takes through the loop, added to the BinOpExpr for b * c.
This last example also leads us to a limitation of the tracing approach; the loop cannot be data-dependent - it cannot depend on the function's arguments, because the tracing step has no concept of runtime values and wouldn't know how many iterations to run through; or at least, it doesn't know this unless we want to perform the tracing run for every runtime execution [4].
The tracing approach is useful in several domains, most notably automatic differentiation (AD). For a slightly deeper taste, check out my radgrad project.
The JAX ML framework uses a tracing approach very similar to the one described here. The first code sample in this post shows the JAX notation. JAX cleverly wraps Numpy with its own version which is traced (similar to our _Box, but JAX calls these boxes "tracers"), letting you write regular-feeling Numpy code that can be JIT optimized and executed on accelerators like GPUs and TPUs via XLA. JAX's tracer builds up an underlying IR (called jaxpr) which can then be emitted to XLA ops and passed to XLA for further lowering and execution.
For a fairly deep overview of how JAX works, I recommend reading the autodidax doc.
As mentioned earlier, JAX has some limitations with things like data-dependent control flow in native Python. This won't work, because there's control flow that depends on a runtime value (count):
import jax
@jax.jit
def sum_datadep(a, b, count):
total = a
for i in range(count):
total += b
return total
print(sum_datadep(10, 3, 3))
When sum_datadep is executed, JAX will throw an exception, saying something like:
This concrete value was not available in Python because it depends on the value of the argument count.
As a remedy, JAX has its own built-in intrinsics from the jax.lax package. Here's the example rewritten in a way that actually works:
import jax
from jax import lax
@jax.jit
def sum_datadep_fori(a, b, count):
def body(i, total):
return total + b
return lax.fori_loop(0, count, body, a)
fori_loop (and many other built-ins in the lax package) is something JAX can trace through, generating a corresponding XLA operation (XLA has support for While loops, to which this lax.fori_loop can be lowered).
The tracing approach has clear benefits for JAX as well; because it only cares about the flow of values, it can handle arbitrarily complicated Python code, as long as the flow of values can be traced. Just like the local variables and data-independent loops shown earlier, but also things like closures. This makes meta-programming and templating easy [5].
The full code for this post is available on GitHub.
[1] | Once again, this is a very simplified example. A more realistic translator would have to support many, many more Python bytecode instructions. |
[2] | In fact, llvmlite itself is a Numba sub-project and is maintained by the Numba team, for which I'm grateful! |
[3] | For a fun exercise, try adding constant folding to the wrapped _op: when both its arguments are constants (not boxes), instead of placing each in a _Box(ConstantExpr(...)), it could perform the mathematical operation on them and return a single constant box. This is a common optimization in compilers! A possible sketch appears after these notes. |
[4] | In all the JIT approaches showed in this post, the expectation is that compilation happens once, but the compiled function can be executed many times (perhaps in a loop). This means that the compilation step cannot depend on the runtime values of the function's arguments, because it has no access to them. You could say that it does, but that's just for the very first time the function is run (in the tracing approach); it has no way of knowing their values the next times the function will run. JAX has some provisions for cases where a function is invoked with a small set of runtime values and we want to separately JIT each of them. |
[5] | A reader pointed out that TensorFlow's AutoGraph feature combines the AST and tracing approaches. TF's eager mode performs tracing, but it also uses AST analyses to rewrite Python loops and conditions into builtins like tf.cond and tf.while_loop. |
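Following up on note [3], here is one possible constant-folding variant of the traced binary op. It reuses the post's _Box, ConstantExpr, BinOpExpr, and Op definitions; this is my own sketch, slightly generalized to fold whenever both operands end up wrapping constants, not the post's code:

import operator

_FOLD = {Op.ADD: operator.add, Op.SUB: operator.sub,
         Op.MUL: operator.mul, Op.DIV: operator.truediv}

def _register_binary_op_folding(opcode, reverse=False):
    def _op(arg1, arg2):
        if reverse:
            arg1, arg2 = arg2, arg1
        box1 = arg1 if isinstance(arg1, _Box) else _Box(ConstantExpr(arg1))
        box2 = arg2 if isinstance(arg2, _Box) else _Box(ConstantExpr(arg2))
        # Fold: both operands are known constants at trace time, so compute
        # the result now instead of emitting a BinOpExpr node into the trace.
        if isinstance(box1.expr, ConstantExpr) and isinstance(box2.expr, ConstantExpr):
            return _Box(ConstantExpr(_FOLD[opcode](box1.expr.value, box2.expr.value)))
        return _Box(BinOpExpr(box1.expr, box2.expr, opcode))
    return _op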
Hugo van Kemenade: Improving licence metadata [Planet Python]
PEP 639 defines a spec on how to document licences used in Python projects.
Instead of using a Trove classifier such as “License :: OSI Approved :: BSD License”, which is imprecise (for example, which BSD licence?), the SPDX licence expression syntax is used.
pyproject.toml

Change pyproject.toml as follows.
I usually use Hatchling as a build backend, and support was added in 1.27:
[build-system]
build-backend = "hatchling.build"
requires = [
"hatch-vcs",
- "hatchling",
+ "hatchling>=1.27",
]
Replace the freeform license field with a valid SPDX license expression, and add license-files which points to the licence files in the repo. There’s often only one, but if you have more than one, list them all:
[project]
...
-license = { text = "MIT" }
+license = "MIT"
+license-files = [ "LICENSE" ]
Optionally delete the deprecated licence classifier:
classifiers = [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
- "License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
For example, see humanize#236 and prettytable#350.
Then make sure to use a PyPI uploader that supports this.
I recommend using Trusted Publishing which I use with pypa/gh-action-pypi-publish to deploy from GitHub Actions. I didn’t need to make any changes here, just make a release as usual.
PyPI shows the new metadata:
pip can also show you the metadata:
❯ pip install prettytable==3.13.0
❯ pip show prettytable
Name: prettytable
Version: 3.13.0
...
License-Expression: BSD-3-Clause
Location: /Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages
Requires: wcwidth
Required-by: norwegianblue, pypistats
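You can also read the field programmatically; assuming a version of prettytable built with the new metadata (core metadata 2.4) is installed, something like this should work:

from importlib.metadata import metadata

meta = metadata("prettytable")
print(meta["License-Expression"])    # e.g. BSD-3-Clause
print(meta.get_all("License-File"))  # e.g. ['LICENSE'] (may be None for older builds)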
A lot of work went into this. Thank you to PEP authors Philippe Ombredanne for creating the first draft in 2019, to C.A.M. Gerlach for the second draft in 2021, and especially to Karolina Surma for getting the third draft over the finish line and helping with the implementation.
And many projects were updated to support this, thanks to the maintainers and contributors of at least:
Header photo: Amelia Earhart’s 1932 pilot licence in the San Diego Air and Space Museum Archive, with no known copyright restrictions.
Real Python: The Real Python Podcast – Episode #239: Behavior-Driven vs Test-Driven Development & Using Regex in Python [Planet Python]
What is behavior-driven development, and how does it work alongside test-driven development? How do you communicate requirements between teams in an organization? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
Daniel Roy Greenfeld: Building a playing card deck [Planet Python]
Today is Valentine's Day. That makes it the perfect day to write a blog post showing how to not just build a deck of cards, but also show off cards from the hearts suit.
Bojan Mihelac: Prefixed Parameters for Django querystring tag [Planet Python]
An overview of Django 5.1's new querystring tag and how to add support for prefixed parameters.
Peter Bengtsson: get in JavaScript is the same as property in Python [Planet Python]
Prefix a function, in an object or class, with `get` and then that acts as a function call without brackets. Just like Python's `property` decorator.
EuroPython: EuroPython February 2025 Newsletter [Planet Python]
Hey ya 👋
Hope you're all having a fantastic February. We sure have been busy and got some exciting updates for you as we gear up for EuroPython 2025, which is taking place once again in the beautiful city of Prague. So let's dive right in!
EuroPython 2025 is right around the corner and our programme team is hard at work putting together an amazing lineup. But we need your help to shape the conference! We received over 572 fantastic proposals, and now it’s time for Community Voting! 🎉 If you've attended EuroPython before or submitted a proposal this year, you’re eligible to vote.
📢 More votes = a stronger, more diverse programme! Spread the word and get your EuroPython friends to cast their votes too.
🏃The deadline is Monday next week, so don’t miss your chance!
🗳️ Vote now: https://ep2025.europython.eu/programme/voting
Want to play a key role in building an incredible conference? Join our review team and help select the best talks for EuroPython 2025! Whether you're a Python expert or an enthusiastic community member, your insights matter.
We’d like to also thank the over 100 people who have already signed up to review! For those who haven’t done so yet, please remember to accept your Pretalx link and get your reviews in by Monday 17th February.
You can already start reviewing proposals, and each review takes as little as 5 minutes. We encourage reviewers to go through at least 20-30 proposals, but if you can do more, even better! With almost 600 submissions to pick from, your help ensures we curate a diverse and engaging programme.
If you're passionate about Python and want to contribute, we’d love to have you. Sign up here: forms.gle/4GTJjwZ1nHBGetM18.
🏃The deadline is Monday next week, so don’t delay!
Got questions? Reach out to us at programme@europython.eu
EuroPython isn’t just present at other Python events—we actively support them too! As a community sponsor, we love helping local PyCons grow and thrive. We love giving back to the community and strengthening Python events across Europe! 🐍💙
PyCon + Web in Berlin
The EuroPython team had a fantastic time at PyCon + Web in Berlin, meeting fellow Pythonistas, exchanging ideas, and spreading the word about EuroPython 2025. It was great to connect with speakers, organizers, and attendees.
Ever wondered how long it takes to walk from Berlin to Prague? A huge thank you to our co-organizers, Cheuk, Artur, and Cristián, for answering that in their fantastic lightning talk about EuroPython!
FOSDEM 2025
We had some members of the EuroPython team at FOSDEM 2025, connecting with the open-source community and spreading the Python love! 🎉 We enjoyed meeting fellow enthusiasts, sharing insights about the EuroPython Society, and giving away the first EuroPython 2025 stickers. If you stopped by—thank you and we hope to see you in Prague this July.
The signups for The Speaker Mentorship Programme closed on 22nd January 2025. We’re excited to have matched 43 mentees with 24 mentors from our community. We had an increase in the number of mentees who signed up and that’s amazing! We’re glad to be contributing to the journey of new speakers in the Python community. A massive thank you to our mentors for supporting the mentees and to our mentees; we’re proud of you for taking this step in your journey as a speaker.
26 mentees submitted at least 1 proposal. Out of this number, 13 mentees submitted 1 proposal, 9 mentees submitted 2 proposals, 2 mentees submitted 3 proposals, 1 mentee submitted 4 proposals and lastly, 1 mentee submitted 5 proposals. We wish our mentees the best of luck. We look forward to the acceptance of their proposals.
In a few weeks, we will host an online panel session with 2–3 experienced community members who will share their advice with first-time speakers. At the end of the panel, there will be a Q&A session to answer all the participants’ questions.
You can watch the recording of the previous year’s workshop here:
EuroPython is one of the largest Python conferences in Europe, and it wouldn’t be possible without our sponsors. We are so grateful for the companies who have already expressed interest. If you’re interested in sponsoring EuroPython 2025 as well, please reach out to us at sponsoring@europython.eu.
We asked our past speakers to share their experiences speaking at EuroPython. These videos have been published on YouTube as shorts, and we've compiled them into brief clips for you to watch.
A big thanks goes to Sebastian Witowski, Jan Smitka, Yuliia Barabash, Jodie Burchell, Max Kahan, and Cheuk Ting Ho for sharing their experiences.
Why You Should Submit a Proposal for EuroPython? Part 2
Why You Should Submit a Proposal for EuroPython? Part 3
The EuroPython conference wouldn’t be what it is without the incredible volunteers who make it all happen. 💞 Behind the scenes, there’s also the EuroPython Society—a volunteer-led non-profit that manages the fiscal and legal aspects of running the conference, oversees its organization, and works on a few smaller projects like the grants programme. To keep everyone in the loop and promote transparency, the Board is sharing regular updates on what we’re working on.
The January board report is ready: https://europython-society.org/board-report-for-january-2025/.
That's all for now! Keep an eye on your inbox and our website for more news and announcements. We're counting down the days until we can come together in Prague to celebrate our shared love for Python. 🐍❤️
Cheers,
The EuroPython Team
Kay Hayen: Nuitka Release 2.6 [Planet Python]
This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, “download now”.
This release has all-around improvements, with a lot of effort spent on bug fixes in the memory leak domain, and preparatory actions for scalability improvements.
MSYS2: Path normalization to native Windows format was required in more places for the MinGW variant of MSYS2. The os.path.normpath function doesn’t normalize to native Win32 paths with MSYS2, instead using forward slashes. This required manual normalization in additional areas. (Fixed in 2.5.1)
UI: Fix, give a proper error message when extension modules asked to be included failed to be located. (Fixed in 2.5.1)
Fix, files with illegal module names (containing .) in their basename were incorrectly considered as potential sub-modules for --include-package. These are now skipped. (Fixed in 2.5.1)
Stubgen: Improved stability by preventing crashes when stubgen encounters code it cannot handle. Exceptions from it are now ignored. (Fixed in 2.5.1)
Stubgen: Addressed a crash that occurred when encountering assignments to non-variables. (Fixed in 2.5.1)
Python 3: Fixed a regression introduced in 2.5 release that could lead to segmentation faults in exception handling for generators. (Fixed in 2.5.2)
Python 3.11+: Corrected an issue where dictionary copies of large split directories could become corrupted. This primarily affected instance dictionaries, which are created as copies until updated, potentially causing problems when adding new keys. (Fixed in 2.5.2)
Python 3.11+: Removed the assumption that module dictionaries always contain only strings as keys. Some modules, like Foundation on macOS, use non-string keys. (Fixed in 2.5.2)
Deployment: Ensured that the --deployment option correctly affects the C compilation process. Previously, only individual disables were applied. (Fixed in 2.5.2)
Compatibility: Fixed a crash that could occur during compilation when unary operations were used within binary operations. (Fixed in 2.5.3)
Onefile: Corrected the handling of __compiled__.original_argv0, which could lead to crashes. (Fixed in 2.5.4)
Compatibility: Resolved a segmentation fault occurring at runtime when calling tensorflow.function with only keyword arguments. (Fixed in 2.5.5)
macOS: Harmless warnings generated for x64 DLLs on arm64 with newer macOS versions are now ignored. (Fixed in 2.5.5)
Python 3.13: Addressed a crash in Nuitka’s dictionary code that occurred when copying dictionaries due to internal changes in Python 3.13. (Fixed in 2.5.6)
macOS: Improved onefile mode signing by applying --macos-signed-app-name to the signature of binaries, not just app bundles. (Fixed in 2.5.6)
Standalone: Corrected an issue where too many paths were added as extra directories from the Nuitka package configuration. This primarily affected the win32com package, which currently relies on the package-dirs import hack. (Fixed in 2.5.6)
Python 2: Prevented crashes on macOS when creating onefile bundles with Python 2 by handling negative CRC32 values. This issue may have affected other versions as well. (Fixed in 2.5.6)
Plugins: Restored the functionality of code provided in pre-import-code, which was no longer being applied due to a regression. (Fixed in 2.5.6)
macOS: Suppressed the app bundle mode recommendation when it is already in use. (Fixed in 2.5.6)
macOS: Corrected path normalization when the output directory argument includes “~”.
macOS: GitHub Actions Python is now correctly identified as a Homebrew Python to ensure proper DLL resolution. (Fixed in 2.5.7)
Compatibility: Fixed a reference leak that could occur with values sent to generator objects. Asyncgen and coroutines were not affected. (Fixed in 2.5.7)
Standalone: The --include-package scan now correctly handles cases where both a package init file and competing Python files exist, preventing compile-time conflicts. (Fixed in 2.5.7)
Modules: Resolved an issue where handling string constants in modules created for Python 3.12 could trigger assertions, and modules created with 3.12.7 or newer failed to load on older Python 3.12 versions when compiled with Nuitka 2.5.5-2.5.6. (Fixed in 2.5.7)
Python 3.10+: Corrected the tuple code used when calling certain method descriptors. This issue primarily affected a Python 2 assertion, which was not impacted in practice. (Fixed in 2.5.7)
Python 3.13: Updated resource readers to accept multiple arguments for importlib.resources.read_text, and correctly handle encoding and errors as keyword-only arguments.
Scons: The platform encoding is no longer used to decode ccache logs. Instead, latin1 is used, as it is sufficient for matching filenames across log lines and avoids potential encoding errors. (Fixed in 2.5.7)
Python 3.12+: Requests to statically link libraries for hacl are now ignored, as these libraries do not exist. (Fixed in 2.5.7)
Compatibility: Fixed a memory leak affecting the results of functions called via specs. This primarily impacted overloaded hard import operations. (Fixed in 2.5.7)
Standalone: When multiple distributions for a package are found, the one with the most accurate file matching is now selected. This improves handling of cases where an older version of a package (e.g., python-opencv) is overwritten with a different variant (e.g., python-opencv-headless), ensuring the correct version is used for Nuitka package configuration and reporting. (Fixed in 2.5.8)
Python 2: Prevented a potential crash during onefile initialization on Python 2 by passing the directory name directly from the onefile bootstrap, avoiding the use of os.dirname which may not be fully loaded at that point. (Fixed in 2.5.8)
Anaconda: Preserved necessary PATH environment variables on Windows for packages that require loading DLLs from those locations. Only PATH entries not pointing inside the installation prefix are removed. (Fixed in 2.5.8)
Anaconda: Corrected the is_conda_package check to function properly when distribution names and package names differ. (Fixed in 2.5.8)
Anaconda: Improved package name resolution for Anaconda distributions by checking conda metadata when file metadata is unavailable through the usual methods. (Fixed in 2.5.8)
MSYS2: Normalized the downloaded gcc path to use native Windows slashes, preventing potential compilation failures. (Fixed in 2.5.9)
Python 3.13: Restored static libpython functionality on Linux by adapting to a signature change in an unexposed API. (Fixed in 2.5.9)
Python 3.6+: Prevented asyncgen from being resurrected when a finalizer is attached, resolving memory leaks that could occur with asyncio in the presence of exceptions. (Fixed in 2.5.10)
UI: Suppressed the gcc download prompt that could appear during --version output on Windows systems without MSVC or with an improperly installed gcc.
Ensured compatibility with monkey patched os.lstat or os.stat functions, which are used in some testing scenarios.
Data Composer: Improved the determinism of the JSON statistics output by sorting keys, enabling reliable build comparisons.
Python 3.6+: Fixed a memory leak in asyncgen with finalizers, which could lead to significant memory consumption when using asyncio and encountering exceptions.
Scons: Optimized empty generators (an optimization result) to avoid generating unused context code, eliminating C compilation warnings.
Python 3.6+: Fixed a reference leak affecting the asend value in asyncgen. While typically None, this could lead to observable reference leaks in certain cases.
Python 3.5+: Improved handling of coroutine and asyncgen resurrection, preventing memory leaks with asyncio and asyncgen, and ensuring correct execution of finally code in coroutines.
Python 3: Corrected the handling of generator objects resurrecting during deallocation. While not explicitly demonstrated, this addresses potential issues similar to those encountered with coroutines, particularly for old-style coroutines created with the types.coroutine decorator.
PGO: Fixed a potential crash during runtime trace collection by ensuring timely initialization of the output mechanism.
Standalone: Added inclusion of metadata for jupyter_client to support its own usage of metadata. (Added in 2.5.1)
Standalone: Added support for the llama_cpp package. (Added in 2.5.1)
Standalone: Added support for the litellm package. (Added in 2.5.2)
Standalone: Added support for the lab_lamma package. (Added in 2.5.2)
Standalone: Added support for docling metadata. (Added in 2.5.5)
Standalone: Added support for pypdfium on Linux. (Added in 2.5.5)
Standalone: Added support for using the debian package. (Added in 2.5.5)
Standalone: Added support for the pdfminer package. (Added in 2.5.5)
Standalone: Included missing dependencies for the torch._dynamo.polyfills package. (Added in 2.5.6)
Standalone: Added support for rtree on Linux. The previous static configuration only worked on Windows and macOS; this update detects it from the module code. (Added in 2.5.6)
Standalone: Added missing pywebview JavaScript data files. (Added in 2.5.7)
Standalone: Added support for newer versions of the sklearn package. (Added in 2.5.7)
Standalone: Added support for newer versions of the dask package. (Added in 2.5.7)
Standalone: Added support for newer versions of the transformers package. (Added in 2.5.7)
Windows: Placed numpy DLLs at the top level for improved support in the Nuitka VM. (Added in 2.5.7)
Standalone: Allowed excluding browsers when including playwright. (Added in 2.5.7)
Standalone: Added support for newer versions of the sqlfluff package. (Added in 2.5.8)
Standalone: Added support for the opencv conda package, disabling unnecessary workarounds for its dependencies. (Added in 2.5.8)
Standalone: Added support for newer versions of the soundfile package.
Standalone: Added support for newer versions of the coincurve package.
Standalone: Added support for newer versions of the apscheduler package.
macOS: Removed the error and workaround forcing that required bundle mode for PyQt5 on macOS, as standalone mode now appears to function correctly.
Standalone: Added support for seleniumbase package downloads.
Module: Implemented 2-phase loading for all modules in Python 3.5 and higher. This improves loading modules as sub-packages in Python 3.12+, where the loading context is no longer accessible.
UI: Introduced the app value for the --mode parameter. This creates an app bundle on macOS and a onefile binary on other platforms, replacing the --macos-create-app-bundle option. (Added in 2.5.5)
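For illustration only (this invocation is a sketch, not taken from the release notes; main.py stands in for your entry point), the new mode is selected like any other Nuitka option:
python -m nuitka --mode=app main.py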
UI: Added a package mode, similar to module, which automatically includes all sub-modules of a package without requiring manual specification with --include-package.
Module: Added an option to completely disable the use of stubgen. (Added in 2.5.1)
Homebrew: Added support for tcl9 with the tk-inter plugin.
Package Resolution: Improved handling of multiple distributions installed for the same package name. Nuitka now attempts to identify the most recently installed distribution, enabling proper recognition of different versions in scenarios like python-opencv and python-opencv-headless.
Python 3.13.1 Compatibility: Addressed an issue where a workaround introduced for Python 3.10.0 broke standalone mode in Python 3.13.1. (Added in 2.5.6)
Plugins: Introduced a new feature for absolute source paths (typically derived from variables or relative to constants). This offers greater flexibility compared to the by_code DLL feature, which may be removed in the future. (Added in 2.5.6)
Plugins: Added support for when conditions in variable sections within Nuitka Package configuration.
macOS: App bundles now automatically switch to the containing directory when not launched from the command line. This prevents the current directory from defaulting to /, which is rarely correct and can be unexpected for users. (Added in 2.5.6)
Compatibility: Relaxed the restriction on setting the compiled frame f_trace. Instead of outright rejection, the deployment flag --no-deployment-flag=frame-useless-set-trace can be used to allow it, although it will be ignored.
Windows: Added the ability to detect extension module entry points using an inline copy of pefile. This enables --list-package-dlls to verify extension module validity on the platform. It also opens possibilities for automatic extension module detection on major operating systems.
Watch: Added support for using conda packages instead of PyPI packages.
UI: Introduced --list-package-exe to complement --list-package-dlls for package analysis when creating Nuitka Package Configuration.
Windows ARM: Removed workarounds that are no longer necessary for compilation. While the lack of dependency analysis might require correction in a hotfix, this configuration should now be supported.
Scalability: Implemented experimental code for more compact code object usage, leading to more scalable C code and constants usage. This is expected to speed up C compilation and code generation in the future once fully validated.
Scons: Added support for C23 embedding of the constants blob. This will be utilized with Clang 19+ and GCC 15+, except on Windows and macOS where other methods are currently employed.
Compilation: Improved performance by avoiding redundant path checks in cases of duplicated package directories. This significantly speeds up certain scenarios where file system access is slow.
Scons: Enhanced detection of static libpython, including for self-compiled, uninstalled Python installations.
Improved no_docstrings support for the xgboost package. (Added in 2.5.7)
Avoided unnecessary usage of numpy for the PIL package.
Avoided unnecessary usage of yaml for the numpy package.
Excluded tcltest TCL code when using tk-inter, as these TCL files are unused.
Avoided using IPython from the comm package.
Avoided using pytest from the pdbp package.
UI: Added categories for plugins in the --help output. Non-package support plugin options are now shown by default. Introduced a dedicated --help-plugins option and highlighted it in the general --help output. This allows viewing all plugin options without needing to enable a specific plugin.
UI: Improved warnings for onefile and OS-specific options. These warnings are now displayed unless the command originates from a Nuitka-Action context, where users typically build for different modes with a single configuration set.
Nuitka-Action: The default mode is now app, building an application bundle on macOS and a onefile binary on other platforms.
UI: The executable path in --version output now uses the report path. This avoids exposing the user’s home directory, encouraging more complete output sharing.
UI: The Python flavor name is now included in the startup compilation message.
UI: Improved handling of missing Windows version information. If only partial version information (e.g., product or file version) is provided, an explicit error is given instead of an assertion error during post-processing.
UI: Corrected an issue where the container argument for run-inside-nuitka-container could not be a non-template file. (Fixed in 2.5.2)
Release: The PyPI upload sdist creation now uses a virtual environment. This ensures consistent project name casing, as it is determined by the setuptools version. While currently using the deprecated filename format, this change prepares for the new format.
Release: The osc binary is now used from the virtual environment to avoid potential issues with a broken system installation, as currently observed on Ubuntu.
Debugging: Added an experimental option to disable the automatic conversion to short paths on Windows.
UI: Improved handling of external data files that overwrite the original file. Nuitka now prompts the user to provide an output directory to prevent unintended overwrites. (Added in 2.5.6)
UI: Introduced the alias --include-data-files-external for the external data files option. This clarifies that the feature is not specific to onefile mode and encourages its wider use.
UI: Allowed none as a valid value for the macOS icon option. This disables the warning about a missing icon when intentionally not providing one.
UI: Added an error check for icon filenames without suffixes, preventing cases where the file type cannot be inferred.
UI: Corrected the examples for --include-package-data with file patterns, which used incorrect delimiters.
Scons: Added a warning about using gcc with LTO when make is unavailable, as this combination will not work. This provides a clearer message than the standard gcc warnings, which can be difficult for Python users to interpret.
Debugging: Added an option to preserve printing during reference count tests. This can be helpful for debugging by providing additional trace information.
Debugging: Added a small code snippet for module reference leak testing to the Developer Manual.
Temporarily disabled tests that expose regressions in Python 3.13.1 that are not intended to be followed.
Improved test organization by using more common code for package tests. The scanning for test cases and main files now utilizes shared code.
Added support for testing variations of a test with different extra flags. This is achieved by exposing a NUITKA_TEST_VARIANT environment variable.
Improved detection of commercial-only test cases by identifying them through their names rather than hardcoding them in the runner. These tests are now removed from the standard distribution to reduce clutter.
Utilized --mode options in tests for better control and clarity. Standalone mode tests now explicitly check for the application of the mode and error out if it’s missing. Mode options are added to the project options of each test case instead of requiring global configuration.
Added a test case to ensure comprehensive coverage of external data file usage in onefile mode. This helps detect regressions that may have gone unnoticed previously.
Increased test coverage for coroutines and async generators, including checks for inspect.isawaitable and testing both function and context objects.
Unified the code used for generating source archives for PyPI uploads, ensuring consistency between production and standard archives.
Harmonized the usage of include <...> vs include "..." based on the origin of the included files, improving code style consistency.
Removed code duplication in the exception handler generator code by utilizing the DROP_GENERATOR_EXCEPTION functions.
Updated Python version checks to reflect current compatibility. Checks for >=3.4 were changed to >=3, and outdated references to Python 3.3 in comments were updated to simply “Python 3”.
Scons: Simplified and streamlined the code for the command options. An OrderedDict is now used to ensure more stable build outputs and prevent unnecessary differences in recorded output.
Improved the executeToolChecked function by adding an argument to indicate whether decoding of returned bytes output to unicode is desired. This eliminates redundant decoding in many places.
This is a major release that consolidates Nuitka considerably.
The scalability work has progressed. Even if there are no immediately visible effects yet, the next releases will show them, as this is the main area of improvement these days.
The memory leaks found are very important and very old. This is the first time that asyncio should work perfectly with Nuitka; it was usable before, but compatibility is now much higher.
Also, this release delivers much nicer help output and handling of plugin help, which no longer needs tricks to see options of a plugin that is not enabled (yet) during --help. The user interface is hopefully cleaner as a result.
Giampaolo Rodola: psutil: drop Python 2.7 support [Planet Python]
About dropping Python 2.7 support in psutil, 3 years ago I stated:
Not a chance, for many years to come. [Python 2.7] currently represents 7-10% of total downloads, meaning around 70k / 100k downloads per day.
Only 3 years later, and to my surprise, downloads for Python 2.7 dropped to 0.36%! As such, as of psutil 7.0.0, I finally decided to drop support for Python 2.7!
These are downloads per month:
$ pypinfo --percent psutil pyversion
Served from cache: False
Data processed: 4.65 GiB
Data billed: 4.65 GiB
Estimated cost: $0.03
| python_version | percent | download_count |
| -------------- | ------- | -------------- |
| 3.10 | 23.84% | 26,354,506 |
| 3.8 | 18.87% | 20,862,015 |
| 3.7 | 17.38% | 19,217,960 |
| 3.9 | 17.00% | 18,798,843 |
| 3.11 | 13.63% | 15,066,706 |
| 3.12 | 7.01% | 7,754,751 |
| 3.13 | 1.15% | 1,267,008 |
| 3.6 | 0.73% | 803,189 |
| 2.7 | 0.36% | 402,111 |
| 3.5 | 0.03% | 28,656 |
| Total | | 110,555,745 |
According to pypistats.org, Python 2.7 downloads represent 0.28% of the total, around 15,000 downloads per day.
Maintaining 2.7 support in psutil had become increasingly difficult, but still possible. E.g. I could still run tests by using old PYPI backports. GitHub Actions could still be tweaked to run tests and produce 2.7 wheels on Linux and macOS. Not on Windows though, for which I had to use a separate service (Appveyor). Still, the amount of hacks in psutil source code necessary to support Python 2.7 piled up over the years, and became quite big. Some disadvantages that come to mind:
C preprocessor version checks scattered around the code (#if PY_MAJOR_VERSION <= 3, etc.).
Not being able to use enums, which created a difference in how CONSTANTS were exposed in terms of API.
Depending on an old pip and other (outdated) deps.
Extra Python 2.7 wheels to build and publish for each release:
psutil-6.1.1-cp27-cp27m-macosx_10_9_x86_64.whl
psutil-6.1.1-cp27-none-win32.whl
psutil-6.1.1-cp27-none-win_amd64.whl
psutil-6.1.1-cp27-cp27m-manylinux2010_i686.whl
psutil-6.1.1-cp27-cp27m-manylinux2010_x86_64.whl
psutil-6.1.1-cp27-cp27mu-manylinux2010_i686.whl
psutil-6.1.1-cp27-cp27mu-manylinux2010_x86_64.whl
The removal was done in PR-2841, which removed around 1500 lines of code (nice!). It felt liberating. In doing so, in the doc I still made the promise that the 6.1.* series will keep supporting Python 2.7 and will receive critical bug-fixes only (no new features). It will be maintained in a specific python2 branch. I explicitly kept the setup.py script compatible with Python 2.7 in terms of syntax, so that, when the tarball is fetched from PYPI, it will emit an informative error message on pip install psutil. The user trying to install psutil on Python 2.7 will see:
$ pip2 install psutil
As of version 7.0.0 psutil no longer supports Python 2.7.
Latest version supporting Python 2.7 is psutil 6.1.X.
Install it with: "pip2 install psutil==6.1.*".
As the informative message states, users that are still on Python 2.7 can still use psutil with:
pip2 install psutil==6.1.*
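For readers curious how such a guard can be wired up, here is a hypothetical sketch of a check near the top of setup.py; the message text simply mirrors the one quoted above, and the real psutil script may differ:
import sys

# Hypothetical guard: refuse to install on Python 2 with a helpful message.
# Written with Python 2 compatible syntax so the message can actually be shown.
if sys.version_info[0] == 2:
    sys.exit(
        'As of version 7.0.0 psutil no longer supports Python 2.7.\n'
        'Latest version supporting Python 2.7 is psutil 6.1.X.\n'
        'Install it with: "pip2 install psutil==6.1.*".'
    )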
Django Weblog: DSF member of the month - Lily Foote [Planet Python]
For February 2025, we welcome Lily Foote (@lilyf) as our DSF member of the month! ⭐
Lily Foote has been a contributor to Django core for many years, especially on the ORM. She is currently a member of the Django 6.x Steering Council and she has been a DSF member since March 2021.
You can learn more about Lily by visiting her GitHub profile.
Let’s spend some time getting to know Lily better!
My name is Lily Foote and I’ve been contributing to Django for most of my career. I’ve also recently got into Rust and I’m excited about using Rust in Python projects. When I’m not programming, I love hiking, climbing and dancing (Ceilidh)! I also really enjoy playing board games and role playing games (e.g. Dungeons and Dragons).
I’d taught myself Python in my final year at university by doing Project Euler problems and then decided I wanted to learn how to make a website. Django was the first Python web framework I looked at and it worked really well for me.
I’ve done a small amount with Flask and FastAPI. More than any new features, I think the thing that I’d most like to see is more long-term contributors to spread the work of keeping Django awesome.
The side project I’m most excited about is Django Rusty Templates, which is a re-implementation of Django’s templating language in Rust.
The ORM of course!
Django Conferences, the mentorship program Djangonaut Space and the whole community!
I think being willing to invest time is really important. Checking in with your mentees frequently and being an early reviewer of their work. I think this helps keep their motivation up and allows for small corrections early on.
Start small and as you get more familiar with Django and the process of contributing you can take on bigger issues. Also be patient with reviewers – Django has high standards, but is mostly maintained by volunteers with limited time.
Yes! It’s a huge honour! Since January, we’ve been meeting weekly and it feels like we’ve hardly scratched the surface of what we want to achieve. The biggest thing we’re trying to tackle is how to improve the contribution experience – especially evaluating new feature ideas – without draining everyone’s time and energy.
I added the Greatest and Least expressions in Django 1.9, with the support of one of the core team at the time. After that, I kept showing up (especially at conference sprints) and finding a new thing to tackle.
Thanks for having me on!
Thank you for doing the interview, Lily!
Python Morsels: Newlines and escape sequences in Python [Planet Python]
Python allows us to represent newlines in strings using the \n "escape sequence" and Python uses line ending normalization when reading and writing files.
This string contains a newline character:
>>> text = "Hello\nworld"
>>> text
'Hello\nworld'
That's what \n represents: a newline character.
If we print this string, we'll see that \n becomes an actual newline:
>>> print(text)
Hello
world
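The line ending normalization mentioned at the start is easy to see in a quick experiment (the filename here is arbitrary): write Windows-style \r\n endings in binary mode, and text mode reads them back as plain \n:
>>> with open("demo.txt", "wb") as f:
...     f.write(b"Hello\r\nworld")
...
12
>>> with open("demo.txt") as f:
...     f.read()
...
'Hello\nworld'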
Why does Python represent a newline as \n?
Every character in a Python …
Streamline Your Logs: Exploring Rsyslog for Effective System Log Management on Ubuntu [Linux Journal - The Original Magazine of the Linux Community]
In the world of system administration, effective log management is crucial for troubleshooting, security monitoring, and ensuring system stability. Logs provide valuable insights into system activities, errors, and security incidents. Ubuntu, like most Linux distributions, relies on a logging mechanism to track system and application events.
One of the most powerful logging systems available on Ubuntu is Rsyslog. It extends the traditional syslog functionality with advanced features such as filtering, forwarding logs over networks, and log rotation. This article provides a guide to managing system logs with Rsyslog on Ubuntu, covering installation, configuration, remote logging, troubleshooting, and advanced features.
Rsyslog (Rocket-fast System for Log Processing) is an enhanced syslog daemon that allows for high-performance log processing, filtering, and forwarding. It is designed to handle massive volumes of logs efficiently and provides robust features such as:
Multi-threaded log processing
Log filtering based on various criteria
Support for different log formats (e.g., JSON, CSV)
Secure log transmission via TCP, UDP, and TLS
Log forwarding to remote servers
Writing logs to databases
Rsyslog is the default logging system in Ubuntu 20.04 LTS and later and is commonly used in enterprise environments.
Before installing Rsyslog, check if it is already installed and running with the following command:
systemctl status rsyslog
If the output shows active (running), then Rsyslog is installed. If not, you can install it using:
sudo apt update
sudo apt install rsyslog -y
Once installed, enable and start the Rsyslog service:
sudo systemctl enable rsyslog
sudo systemctl start rsyslog
To verify Rsyslog’s status, run:
systemctl status rsyslog
Rsyslog’s primary configuration files are:
/etc/rsyslog.conf – The main configuration file
/etc/rsyslog.d/ – Directory for additional configuration files
Rsyslog uses a facility, severity, action model:
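To illustrate the idea, a selector line in /etc/rsyslog.conf pairs a facility and severity with an action; the facility, file path, and remote address below are examples only, not recommended defaults:
# facility.severity    action
mail.err        /var/log/mail-errors.log
# forward everything from the auth facility to a remote host over UDP
auth.*          @192.0.2.10:514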
Linux Networking Protocols: Understanding TCP/IP, UDP, and ICMP [Linux Journal - The Original Magazine of the Linux Community]
In the world of Linux networking, protocols play a crucial role in enabling seamless communication between devices. Whether you're browsing the internet, streaming videos, or troubleshooting network issues, underlying networking protocols such as TCP/IP, UDP, and ICMP are responsible for the smooth transmission of data packets. Understanding these protocols is essential for system administrators, network engineers, and even software developers working with networked applications.
This article provides an exploration of the key Linux networking protocols: TCP (Transmission Control Protocol), UDP (User Datagram Protocol), and ICMP (Internet Control Message Protocol). We will examine their working principles, advantages, differences, and practical use cases in Linux environments.
The TCP/IP model (Transmission Control Protocol/Internet Protocol) serves as the backbone of modern networking, defining how data is transmitted across interconnected networks. It consists of four layers:
Application Layer: Handles high-level protocols like HTTP, FTP, SSH, and DNS.
Transport Layer: Ensures reliable or fast data delivery via TCP or UDP.
Internet Layer: Manages addressing and routing with IP and ICMP.
Network Access Layer: Deals with physical transmission methods such as Ethernet and Wi-Fi.
The TCP/IP model is simpler than the traditional OSI model but still retains the fundamental networking concepts necessary for communication.
TCP is a connection-oriented protocol that ensures data is delivered accurately and in order. It is widely used in scenarios where reliability is crucial, such as web browsing, email, and file transfers.
Key Features of TCP:
Reliable Transmission: Uses acknowledgments (ACKs) and retransmissions to ensure data integrity.
Connection-Oriented: Establishes a dedicated connection before data transmission.
Ordered Delivery: Maintains the correct sequence of data packets.
Error Checking: Uses checksums to detect transmission errors.
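As a small illustration of this connection-oriented behavior, the sketch below opens a TCP connection from Python's standard socket module (the host and request are placeholders); the handshake described next happens inside create_connection() before any application data is exchanged:
import socket

# Minimal TCP client: connect, send a request, read a reply.
# Retransmission, ordering, and checksums are handled by the kernel's TCP stack.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = sock.recv(1024)
    print(reply.decode("latin-1", errors="replace"))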
Connection Establishment – The Three-Way Handshake:
Asahi Linux Lead Developer Hector Martin Resigns From Linux Kernel [Slashdot: Linux]
Asahi lead developer Hector Martin, writing in an email: I no longer have any faith left in the kernel development process or community management approach. Apple/ARM platform development will continue downstream. If I feel like sending some patches upstream in the future myself for whatever subtree I may, or I may not. Anyone who feels like fighting the upstreaming fight themselves is welcome to do so. The Register points out that the action follows this interaction with Linus Torvalds. Hector Martin: If shaming on social media does not work, then tell me what does, because I'm out of ideas. Linus Torvalds: How about you accept the fact that maybe the problem is you. You think you know better. But the current process works. It has problems, but problems are a fact of life. There is no perfect. However, I will say that the social media brigading just makes me not want to have anything at all to do with your approach. Because if we have issues in the kernel development model, then social media sure as hell isn't the solution.
ONLYOFFICE 8.3 Released, Now Supports Apple iWork Files [OMG! Ubuntu!]
A new version of ONLYOFFICE Desktop Editors, a free, open-source office suite for Windows, macOS, and Linux, is now available to download. ONLYOFFICE 8.3 brings a bunch of new features and nimble enhancements spread throughout the full suite, which is composed of word processor, spreadsheet, presentation, form, and PDF editing apps. Such as? Well, the headline feature is the ability to open and work with Apple iWork documents (.pages, .numbers, .key) and Hancom Office files (.hwp, .hwpx). Opening these documents will convert them to OOXML to support editing. It’s not possible to edit the native files themselves, nor export/save edits back […]
How to Disable ‘App is Ready’ Notifications in Ubuntu [OMG! Ubuntu!]
Finding yourself annoyed at those ‘window is ready’ notifications which pop up when you open some apps in GNOME Shell on Ubuntu? If so, you can disable them by installing a GNOME Shell extension. Now, notifications are helpful—heck, vital when they inform, alert, or indicate that something requires our immediate attention or actioning. But “app is ready” notifications? I don’t find them anything other than obvious. I’m not amnesic; I know the app is ready – I just opened it! They aren’t predictable either. Some apps show them, others don’t. It depends on the app’s metadata, how fast app initialisation is (you’ll see them more […]
LibreOffice 25.2 Released, This is What’s New [OMG! Ubuntu!]
LibreOffice 25.2 has been released, this year’s first major update to the leading open-source office software for Windows, macOS, and Linux. As you’d expect, the update delivers a sizeable set of changes spread throughout the productivity suite, including notable UI changes, accessibility improvements, and more important interoperability buffs to support cross-suite workflows. It’s important to remember that open-source software like LibreOffice doesn’t appear out of thin air; it’s made by humans, many unpaid, others paid to work on specific parts only. We all have personal wish-lists of features and changes we want our favourite open-source apps to add, but we […]
Installing Ubuntu on WSL in Windows 11 is Now Easier [OMG! Ubuntu!]
Windows Subsystem for Linux (WSL) user? If so, you will be pleased to hear that Ubuntu is now available in Microsoft’s new tar-based distro format — no need to use the sluggish Microsoft Store. Canonical announced the news today, noting that “the new tar-based WSL distro format allows developers and system administrators to distribute, install, and manage Ubuntu WSL instances from tar files without relying on the Microsoft Store.” In not relying on the Microsoft Store for distribution, it’s less hassle for enterprises to roll out (and customise) Ubuntu on WSL at scale as images packaged in using the new […]
Firefox 135 Brings New Tab Page Tweaks, AI Chatbot Access + More [OMG! Ubuntu!]
Right on schedule, a new update to the Mozilla Firefox web browser is available for download. Last month’s Firefox 134 release saw the New Tab page layout refreshed for users in the United States, let Linux go hands-on with touch-hold gestures, seeded the Ecosia search engine, and fine-tuned the performance of the built-in pop-up blocker. Firefox 135, as you can probably intuit, brings an equally sizeable set of changes to the fore including a wider rollout of its new New Tab page layout to all locales where Stories are available: It’s not a massive makeover, granted. But the new layout adjusts the […]
How to Fix Spotify ‘No PubKey’ Error on Ubuntu [OMG! Ubuntu!]
Do you use the official Spotify DEB on Ubuntu (or an Ubuntu-based Linux distribution like Linux Mint)? If so, you’ll be used to receiving updates to the Spotify Linux client direct from the official Spotify APT repo, right alongside all your other DEB-based software. Thing is: if you haven’t checked for updates from the command line recently you might not be aware that the security key used to ‘sign’ packages from the Spotify APT repo stopped working at the end of last year. Annoying, but not catastrophic as it—thankfully—doesn’t stop the Spotify Linux app from working, it just pollutes terminal output […]
Linux Icon Pack Papirus Gets First Update in 8 Months [OMG! Ubuntu!]
Fans of the Papirus icon theme for Linux desktops will be happy to hear a new version is now available to download. Papirus‘s first update in 2025 improves support for KDE Plasma 6 by adding Konversation, KTorrent and RedShift tray icons, KDE and Plasma logo glyphs for use in ‘start menu’ analogues, as well as an assortment of symbolic icons. Retro gaming fans will appreciate an expansion in mime type support in this update. Papirus now includes file icons for ROMs used for emulating ZX Spectrum, SEGA Dreamcast, SEGA Saturn, MSX, and Neo Geo Pocket consoles; and Papirus now uses different […]
GNOME Introduces New UI & Monospace Adwaita Fonts [OMG! Ubuntu!]
GNOME has announced a change to its default UI and monospace fonts ahead of the upcoming GNOME 48 release — a typographic turnabout that won’t impact Ubuntu users directly, though. Should you feel a sense of deja vu here it’s because GNOME trialled a font switch last year, during development of GNOME 47. Back then, it replaced its home-grown Cantarell font with the popular open-source sans Inter font (trivia: used by Zorin OS). The change was reverted prior to the GNOME 47 release due to various UI quirks, coverage issues, and compatibility (thus underlining the importance of testing things out prior […]
Try Mozilla’s New AI Detector Add-On for Firefox [OMG! Ubuntu!]
Want to find out if the text you’re reading online was written by a real human or spat out by a large language model (LLM) trying to sound like one? Mozilla’s Fakespot Deepfake Detector Firefox add-on may help give you an indication. Similar to online AI detector tools, the add-on can analyse text (of 32 words or more) to identify patterns, traits, and tells common in AI generated or manipulated text. It uses Mozilla’s proprietary ApolloDFT engine and a set of open-source detection models. But unlike some tools, Mozilla’s Fakespot Deepfake Detector browser extension is free to use, does […]
High Tide is a Promising New Linux TIDAL Client [OMG! Ubuntu!]
Linux users hunting for a native client to stream music from TIDAL will want to keep an eye on a promising new open-source app called High Tide. High Tide is an unofficial but native Linux client for the TIDAL music streaming service. It’s written in Python, uses GTK4/libadwaita UI, and leverages official TIDAL APIs for playback. TIDAL, often positioned as the ‘pro-artist music streaming platform’, isn’t as popular as industry titan Spotify (likely because it doesn’t offer a ‘free’ ad-supported tier) but is nonetheless a solid rival to it in terms of features and catalogue breadth. Windows, macOS, Android and […]
Thunderbird Email Client Moving to Monthly Feature Drops [OMG! Ubuntu!]
The Thunderbird email client is making its monthly ‘release channel’ builds the default download starting in March. “We’re excited to announce that starting with the 135.0 release in March 2025, the Thunderbird Release channel will be the default download,” Corey Bryant, manager of Thunderbird Release Operations, shares in an update on the project’s discussion hub. Right now, users who visit the Thunderbird website and hit the giant download button get the latest Extended Support Release (ESR) build by default. It gets one major feature update a year plus smaller bug fix and security updates issued in-between. The version of Thunderbird Ubuntu […]
Confirmed: Ubuntu Dev Discussions Moving to Matrix [OMG! Ubuntu!]
Ubuntu’s key developers have agreed to switch to Matrix as the primary platform for real-time development communications involving the distro. From March, Matrix will replace IRC as the place where critical Ubuntu development conversations, requests, meetings, and other vital chatter must take place. Developers are asked to ensure they have a presence on the platform so they are reachable. Only the current #ubuntu-devel and #ubuntu-release Libera IRC channels are moving to Matrix, but other Ubuntu development-related channels can choose to move – officially, given some projects were using Matrix over IRC already. As a result, any major requests to/of the key Ubuntu […]
EuroPython Society: Board Report for January 2025 [Planet Python]
The top priority for the board in January was finishing the hiring of our event manager. We’re super excited to introduce Anežka Müller! Anežka is a freelance event manager and a longtime member of the Czech Python community. She’s a member of the Pyvec board, co-organizes PyLadies courses, PyCon CZ, Brno Pyvo, and Brno Python Pizza. She’ll be working closely with the board and OPS team, mainly managing communication with service providers. Welcome onboard!
Our second priority was onboarding teams. We’re happy that we already have the Programme team in place—they started early and launched the Call for Proposals at the beginning of January. We’ve onboarded a few more teams and are in the process of bringing in the rest.
Our third priority was improving our grant programme in order to support more events with our limited budget and to make it more clear and transparent. We went through past data, came up with a new proposal, discussed it, voted on it, and have already published it on our blog.
Python Morsels: Avoid over-commenting in Python [Planet Python]
When do you need a comment in Python and when should you consider an alternative to commenting?
Table of contents
Here is a comment I would not write in my code:
def first_or_none(iterable):
# Return the first item in given iterable (or None if empty).
for item in iterable:
return item
return None
That comment seems to describe what this code does... so why would I not write it?
I do like that comment, but I would prefer to write it as a docstring instead:
def first_or_none(iterable):
"""Return the first item in given iterable (or None if empty)."""
for item in iterable:
return item
return None
Documentation strings are for conveying the purpose of a function, class, or module, typically at a high level.
Unlike comments, they can be read by Python's built-in help function:
>>> help(first_or_none)
Help on function first_or_none in module __main__:
first_or_none(iterable)
Return the first item in given iterable (or None if empty).
Docstrings are also read by other documentation-oriented tools, like Sphinx.
Here's a potentially helpful comment:
EuroPython Society: Changes in the Grants Programme for 2025 [Planet Python]
TL;DR:
Background:
The EPS introduced a Grant Programme in 2017. Since then, we have granted almost EUR 350k through the programme, partly via EuroPython Finaid and by directly supporting other Python events and projects across Europe. In the last two years, the Grant Programme has grown to EUR 100k per year, with even more requests coming in.
With this growth come new challenges in how to distribute funds fairly so that more events can benefit. Looking at data from the past two years, we’ve often been close to or over our budget. The guidelines haven’t been updated in a while. As grant requests become more complex, we’d like to simplify and clarify the process, and better explain it on our website.
We would also like to acknowledge that EuroPython, when traveling around Europe, has an additional impact on the host country, and we’d like to set aside part of the budget for the local community.
The Grant Programme is also a primary funding source for EuroPython Finaid. To that end, we aim to allocate 30% of the total Grant Programme budget to Finaid, an increase from the previous 25%.
Changes:
Using 2024 data and the budget available for Community Grants (60% of the total), we’ve simulated different budget caps and found a sweet spot at EUR 6,000, where we are able to support all the requests, with most of the grants being below that limit. For 2025 we expect to receive a similar or bigger number of requests.
|           | 2024        | 6k          | 5k          | 4k          | 3.5k        | 3k          |
| --------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| Grant #1  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #2  | € 8,000.00  | € 6,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #3  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #4  | € 5,000.00  | € 5,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #5  | € 10,000.00 | € 6,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #6  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #7  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  |
| Grant #8  | € 5,000.00  | € 5,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #9  | € 6,000.00  | € 6,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #10 | € 2,900.00  | € 2,900.00  | € 2,900.00  | € 2,900.00  | € 2,900.00  | € 2,900.00  |
| Grant #11 | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  |
| Grant #12 | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  |
| Grant #13 | € 450.00    | € 450.00    | € 450.00    | € 450.00    | € 450.00    | € 450.00    |
| Grant #14 | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  |
| Grant #15 | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  |
| Grant #16 | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  |
| Grant #17 | € 3,500.00  | € 3,500.00  | € 3,500.00  | € 3,500.00  | € 3,500.00  | € 3,000.00  |
| Grant #18 | € 1,500.00  | € 1,500.00  | € 1,500.00  | € 1,500.00  | € 1,500.00  | € 1,500.00  |
| SUM       | € 66,350.00 | € 60,350.00 | € 57,350.00 | € 52,350.00 | € 48,350.00 | € 43,850.00 |
We are introducing a special 10% pool of money to be used on projects in the host country (in 2025 that’s again the Czech Republic). This pool is set aside at the beginning of the year, with one caveat: we would like to deploy it in the first half of the year. Whatever is left unused goes back to the Community Pool to be used in the second half of the year.
Expected outcome:
Real Python: Quiz: Python Keywords: An Introduction [Planet Python]
In this quiz, you’ll test your understanding of Python Keywords.
Python keywords are reserved words with specific functions and restrictions in the language. These keywords are always available in Python, which means you don’t need to import them. Understanding how to use them correctly is fundamental for building Python programs.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Zato Blog: Modern REST API Tutorial in Python [Planet Python]
Great APIs don't win theoretical arguments - they just work reliably and make developers' lives easier.
Here's a tutorial on what building production APIs is really about: creating interfaces that are practical in usage, while keeping your systems maintainable for years to come.
Sound intriguing? Read the modern REST API tutorial in Python here.
➤ Python API integration tutorials
➤ What is a Network Packet Broker? How to automate networks in Python?
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
➤ Open-source iPaaS in Python
Kushal Das: pass using stateless OpenPGP command line interface [Planet Python]
Yesterday I wrote about how I am using a different tool for git signing and verification. Next, I replaced my pass usage. I have a small patch to use the stateless OpenPGP command line interface (SOP). It is an implementation-agnostic standard for handling OpenPGP messages. You can read the whole SPEC here.
cargo install rsop rsop-oct
And copied the bash script from my repository to somewhere on my path.
The rsoct binary from rsop-oct follows the same SOP standard but uses the card for signing/decryption. I stored my public key in the ~/.password-store/.gpg-key file, which is in turn used for encryption.
Here nothing changed related to my daily pass usage, except the number of times I am typing my PIN :)
PyCoder’s Weekly: Issue #668: NumPy, Compiling Python 1.0, BytesIO, and More (Feb. 11, 2025) [Planet Python]
#668 – FEBRUARY 11, 2025
In this video course, you’ll learn how to use NumPy by exploring several interesting examples. You’ll read data from a file into an array and analyze structured arrays to perform a reconciliation. You’ll also learn how to quickly chart an analysis & turn a custom function into a vectorized function.
REAL PYTHON course
As part of the celebration of 31 years of Python, Bite Code compiles the original Python 1.0 and plays around with it.
BITE CODE!
Postman AI Agent Builder is a suite of solutions that accelerates agent development. With centralized access to the latest LLMs and APIs from over 18,000 companies, plus no-code workflows, you can quickly connect critical tools and build multi-step agents — all without writing a single line of code →
POSTMAN sponsor
BytesIO
If you want to save memory when reading from a BytesIO object, getvalue() is surprisingly a good choice.
ITAMAR TURNER-TRAURING
This tutorial will help you master Python string splitting. You’ll learn to use .split(), .splitlines(), and re.split() to effectively handle whitespace, custom delimiters, and multiline text, which will level up your data parsing skills.
REAL PYTHON
“Ever had a Python function behave strangely, remembering values between calls when it shouldn’t? You’re not alone! This is one of Python’s sneakiest pitfalls—mutable default parameters.”
CRAIG RICHARDS • Shared by Bob
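The pitfall being referred to is easy to reproduce; here is a minimal, self-contained illustration (not taken from the linked article):
def add_task(task, tasks=[]):  # the default list is created once, at definition time
    tasks.append(task)
    return tasks

print(add_task("write"))   # ['write']
print(add_task("review"))  # ['write', 'review'] -- the function "remembered" the first call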
Python developers use Posit Package Manager to mirror public & internally developed repos within their firewalls. Get reporting on known vulnerabilities to proactively address potential threats. High-security environments can even run air-gapped.
POSIT sponsor
There are several Just In Time compilation tools out there that allow you to decorate a function to indicate you want it compiled. This article shows you how that works.
ELI BENDERSKY
Django 5.2 contains a new helper on the email class to make it easier to write unit-tests validating that your email contains the content you expect it to contain.
MEDIUM.COM/AMBIENT-INNOVATION • Shared by Ronny Vedrilla
Django 5.0 added the concept of field groups which make it easier to customize the layout of Django forms. This article covers what groups are and how to use them.
VALENTINO GAGLIARDI
The author was recently invited with other senior devs to give a lightning talk on their personal development philosophy. This post captures those thoughts.
QNTM
This Things-I’ve-Learned post talks about how you can suppress the KeyboardInterrupt exception so your program doesn’t exit with a traceback.
RODRIGO GIRÃO SERRÃO
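One common way to achieve this, which may or may not be the approach taken in the post, is contextlib.suppress:
import contextlib
import time

def main():
    while True:
        time.sleep(1)  # stand-in for real work

# Exit quietly on Ctrl+C instead of printing a traceback.
with contextlib.suppress(KeyboardInterrupt):
    main()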
This PEP proposes a Python Packaging Council with broad authority over packaging standards, tools, and implementations.
PYTHON.ORG
“Definitions for colloquial Python terminology (effectively an unofficial version of the Python glossary).”
TREY HUNNER
February 12, 2025 (REALPYTHON.COM)
February 14, 2025 (MEETUP.COM)
February 15 to February 17, 2025 (BARCAMPS.EU)
February 19, 2025 (MEETUP.COM)
February 20, 2025 (MEETUP.COM)
February 22 to February 23, 2025 (DJANGOCONGRESS.JP)
February 22 to February 24, 2025 (PYCONFHYD.ORG)
Happy Pythoning!
This was PyCoder’s Weekly Issue #668.
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Python Insider: Python 3.14.0 alpha 5 is out [Planet Python]
Here comes the antepenultimate alpha.
https://www.python.org/downloads/release/python-3140a5/
This is an early developer preview of Python 3.14
Python 3.14 is still in development. This release, 3.14.0a5, is the fifth of seven planned alpha releases.
Alpha releases are intended to make it easier to test the current state of new features and bug fixes and to test the release process.
During the alpha phase, features may be added up until the start of the beta phase (2025-05-06) and, if necessary, may be modified or deleted up until the release candidate phase (2025-07-22). Please keep in mind that this is a preview release and its use is not recommended for production environments.
Many new features for Python 3.14 are still being planned and written. Among the major new features and changes so far:
The next pre-release of Python 3.14 will be the penultimate alpha, 3.14.0a6, currently scheduled for 2025-03-14.
2025-01-29 marked the start of a new lunar year, the Year of the Snake 🐍 (and the Year of Python?).
For centuries, π was often approximated as 3 in China. Some time between the years 1 and 5 CE, astronomer, librarian, mathematician and politician Liu Xin (劉歆) calculated π as 3.154.
Around 130 CE, mathematician, astronomer, and geographer Zhang Heng (張衡, 78–139) compared the celestial circle with the diameter of the earth as 736:232 to get 3.1724. He also came up with a formula for the ratio between a cube and inscribed sphere as 8:5, implying the ratio of a square’s area to an inscribed circle is √8:√5. From this, he calculated π as √10 (~3.162).
Third century mathematician Liu Hui (刘徽) came up with an algorithm for calculating π iteratively: calculate the area of a polygon inscribed in a circle, then as the number of sides of the polygon is increased, the area becomes closer to that of the circle, from which you can approximate π.
This algorithm is similar to the method used by Archimedes in the 3rd century BCE and Ludolph van Ceulen in the 16th century CE (see 3.14.0a2 release notes), but Archimedes only went up to a 96-sided polygon (96-gon). Liu Hui went up to a 192-gon to approximate π as 157/50 (3.14) and later a 3072-gon for 3.14159.
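The doubling step is compact enough to show in a few lines of Python: starting from a hexagon inscribed in a unit circle (side length 1) and repeatedly doubling the number of sides, half the perimeter converges on π, reproducing the 192-gon and 3072-gon values mentioned above.
import math

sides = 6
s = 1.0  # side length of a hexagon inscribed in a unit circle
for _ in range(9):
    s = math.sqrt(2 - math.sqrt(4 - s * s))  # side length after doubling the sides
    sides *= 2
    print(f"{sides:>5}-gon: {sides * s / 2:.7f}")  # half the perimeter approximates pi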
Liu Hui wrote a commentary on the book The Nine Chapters on the Mathematical Art which included his π approximations.
In the fifth century, astronomer, inventor, mathematician, politician, and writer Zu Chongzhi (祖沖之, 429–500) used Liu Hui’s algorithm to inscribe a 12,288-gon to compute π between 3.1415926 and 3.1415927, correct to seven decimal places. This was more accurate than Hellenistic calculations and wouldn’t be improved upon for 900 years.
Happy Year of the Snake!
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organisation contributions to the Python Software Foundation.
Regards from a remarkably snowless Helsinki,
Your release team,
Hugo van Kemenade
Ned Deily
Steve Dower
Łukasz Langa
Real Python: Building a Python Command-Line To-Do App With Typer [Planet Python]
Building an application to manage your to-do list can be an interesting project when you’re learning a new programming language or trying to take your skills to the next level. In this video course, you’ll build a functional to-do application for the command line using Python and Typer, which is a relatively young library for creating powerful command-line interface (CLI) applications in almost no time.
With a project like this, you’ll apply a wide set of core programming skills while building a real-world application with real features and requirements.
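If you haven’t seen Typer before, a minimal command-line app looks roughly like this (the commands here are illustrative and not the course’s actual code):
import typer

app = typer.Typer()

@app.command()
def add(description: str):
    """Add a to-do item."""
    typer.echo(f"Added: {description}")

@app.command()
def done(task_id: int):
    """Mark a to-do item as done."""
    typer.echo(f"Completed task #{task_id}")

if __name__ == "__main__":
    app()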
In this video course, you’ll learn how to:
CliRunner and pytest
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Kushal Das: Using openpgp-card-tool-git with git [Planet Python]
One of the powers of Unix systems comes from the various small tools and how they work together. One such new tool I have been using for some time is for git signing & verification using OpenPGP and my Yubikey for the actual signing operation via openpgp-card-tool-git. I replaced the standard gpg for this use case with the oct-git command from this project.
cargo install openpgp-card-tool-git
Then you will have to configure your git configuration (in my case, the global configuration).
git config --global gpg.program <path to oct-git>
I am assuming that you already had it configured before for signing, otherwise you have to run the following two commands too.
git config --global commit.gpgsign true
git config --global tag.gpgsign true
Before you start using it, you want to save the pin in your system keyring.
Use the following command.
oct-git --store-card-pin
That is it. Now your git commit will sign commits using the oct-git tool.
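To check that everything is wired up, a signed commit can then be inspected with standard git commands (verification also goes through the configured gpg.program):
git commit -m "test: signed via oct-git"
git log --show-signature -1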
In the next blog post I will show how to use the other tools from the author for various different OpenPGP operations.
Seth Michael Larson: Building software for connection (#2: Consensus) [Planet Python]
This is the second article in a series about “software for connection”.
In the previous article we concluded that a persistent always-on internet connection isn't required for software to elicit feelings of connection between humans.
Building on this conclusion: let's explore how Animal Crossing software was able to intercommunicate without requiring a centralized server and infrastructure and the trade-offs for these design decisions.
Animal Crossing has over 1,000 unique items that need to be collected for a complete catalog, including furniture, wallpapers, clothing, parasols, and carpets. Many of these items are quite rare or were only programmed to be accessible through an official Nintendo-affiliated distribution such as a magazine or online contest.
Beyond official distributions, it's clear Animal Crossing's designer, Katsuya Eguchi, wanted players to cooperate to complete their catalogs. The game incentivized trading items between towns by assigning one “native fruit” (Apple, Orange, Cherry, Peach, or Pear) and randomly making a subset of items harder to find than others depending on a hidden “item group” variable (either A, B, or C).
Items could be exchanged between players when one player visits another town, but this required physically bringing your memory card to another player's GameCube. The GameCube might have come with a handle, but the 'cube wasn't exactly a portable console. Sharing a physical space isn't something you can do with everyone or on a regular basis.
So what did Katsuya Eguchi design for Animal Crossing? To allow for item distributions from magazines and contests and to make player-to-player item sharing easier Animal Crossing included a feature called “secret codes”.
This feature worked by allowing players to exchange 28-character codes with Tom Nook for items. Players could also generate codes for their friends to “send” an item from their own game to a different town. Codes could be shared by writing them on a paper note, instant message, or text message.
Huntr R. explaining how “secret codes” are implemented. A surprising amount of cryptography!
This Reddit comment thread from the GameCube subreddit was the initial inspiration for this entire series. The post is about someone's niece who just started playing Animal Crossing for the first time. The Redditor asked folks to send items to their nieces' town using the secret code system.
This ended up surprising many folks that this system still worked in a game that was over 23 years old! For reference, Nintendo Wi-Fi Connection and Nintendo Network were only available for 8 and 13 years respectively. Below are a handful of the comments from the thread:
- “That's still online???”
- “It was online???!”
- “For real does this still work lol?”
- “...Was it ever online?”
A secret code for my favorite Animal Crossing NES game Wario's Woods:
Xvl5HeG&C9prXu
IWhuzBinlVlqOg
It's hard not to take these comments as indicators that something is very wrong with internet-connected software today. What had to go wrong for a system that continues to work to be met with surprise? Many consumers' experience with software products today is that they become useless e-waste after some far-away service is discontinued a few years after purchase.
My intuition from this is that software that requires centralized servers and infrastructure to function will have shorter lifetimes than software which is offline or only opportunistically uses online functionality.
I don't think this is particularly insightful, more dependencies always means less resilience. But if we're building software for human connection then the software should optimally only be limited by the availability of humans to connect.
Animal Crossing's secret code system is far from perfect. The system is easily abusable, as the same secret codes can be reused over and over by the same user to duplicate items without ever expiring. The only limit was that 3 codes could be used per day.
Secret codes are tied to a specific town and recipient name, but even this stopgap can be defeated by setting your name and town name to specific values to share codes across many different players.
Not long after Animal Crossing's release the secret code algorithm was reverse-engineered so secret codes for any item could be created for any town and recipient name as if they came from an official Nintendo distribution. This was possible because the secret code system relied on "security through obscurity".
Could centralization be the answer to preventing these abuses?
The most interesting property that a centralized authority approach provides is global consensus: forcing everyone to play by the same rules. By storing the “single source-of-truth” a central authority is able to prevent abuses like the ones mentioned above.
For example, a centralized “secret code issuing server” could generate new unique codes per-use and check each code's validity against a database to prevent users from generating their own illegitimate codes or codes being re-used multiple times.
The problem with centralized consensus is that it tends to spread virally until it covers the entire software state. A centralized server can generate codes perfectly, but how can that same server know that the items you're exchanging for codes were obtained legitimately? To know this the server would also need to track item legitimacy, leading to software which requires an internet connection to operate.
This is optimal from a correctness perspective, but as was noted earlier, I suspect that if such a server was a mandatory part of the secret code system in Animal Crossing that the system would likely not be usable today.
This seems like a trade-off, which future would you rather have?
If I were designing Animal Crossing's secret code system with modern hardware, what would it look like? How can we keep the offline fall-back while providing consensus and being less abusable, especially for official distributions?
I would likely use a public-key cryptographic system for official distributions, embedding a certificate that could be used to “verify” that specific secret codes originated from the expected centralized entity. Codes that are accepted would be recorded to prevent reusing the same code multiple times in the same town. Using public-key cryptography prevents the system from being reverse-engineered to distribute arbitrary items until the certificate private key was cracked.
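As a rough sketch of that idea, using the third-party cryptography package (the payload format and field names below are invented purely for illustration):
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Done once by the official distributor; the public key ships inside the game.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The data a "secret code" would encode, e.g. recipient, item, and a nonce.
payload = b"town=Oakville;recipient=Lily;item=royal_crown;nonce=42"
signature = private_key.sign(payload)  # embedded alongside the payload in the code

# Done by the game when the code is redeemed.
try:
    public_key.verify(signature, payload)
    print("code accepted")
except InvalidSignature:
    print("code rejected")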
For sharing items between players I would implement a system where each town generated a public and private key and the public key was shared to other towns whenever the software was able to, such as when a player visited the other town. Players would only be able to send items to players that they have visited (which for Animal Crossing required physical presence, more on this later!)
Each sender could store a nonce value for each potential recipient. Embedding that nonce into the secret code would allow the recipients' software to verify that the specific code hadn't been used yet. The nonce wouldn't have to be long to avoid simple reusing of codes.
Both above systems would require much more data to be embedded into each “secret code” compared to the 28-character codes from the GameCube. For this I would use QR codes to embed over 2KB of data into a single QR code. Funnily enough, Animal Crossing New Leaf and onwards use QR code technology for players to share design patterns.
This design is still abusable if users can modify their software or hardware but doesn't suffer from the trivial-to-exploit flaws of Animal Crossing's secret code system.
What if we could have the best of both worlds: consensus that is both global and decentralized? At least today, we are out of luck.
Decentralized global consensus is technologically feasible, but the existing solutions (mostly blockchains) are expensive (both in energy and capital) and can't handle throughput on any sort of meaningful scale.
There are many other decentralized consensus systems that are able to form “pockets” of useful peer-to-peer consensus using a fraction of the resources, such as email, BitTorrent, ActivityPub, and Nostr. These systems are only possible by adding some centralization or by only guaranteeing local consensus.
Obviously global consensus is important for certain classes of software like financial, civics, and infrastructure, but I wonder how the necessity of consensus in software changes for software with different risk profiles.
For software with fewer risks associated with misuse, is there as much need for global consensus? How can software for connection be designed to reduce risk and require less consensus to be effective? If global consensus and centralized servers become unnecessary, can we expect software for connection to be usable on much longer timescales, essentially for as long as there are users?
Quansight Labs Blog: PEP 517 build system popularity [Planet Python]
Analysis of PEP 517 build backends used in 8000 top PyPI packages
Leveraging Tmux and Screen for Advanced Session Management [Linux Journal - The Original Magazine of the Linux Community]
In the realm of Linux, efficiency and productivity are not just goals but necessities. Among the most powerful tools in a power user's arsenal are terminal multiplexers, specifically tmux and Screen. These tools enhance the command line interface experience by allowing users to run multiple terminal sessions within a single window, detach them and continue working in the background, and reattach them at will. This guide delves into the world of tmux and Screen, showing you how to harness their capabilities to streamline your workflow and boost your productivity.
A terminal multiplexer is a software application that allows multiple terminal sessions to be accessed and controlled from a single screen. Users can switch between these sessions seamlessly, without the need to open multiple terminal windows. This capability is particularly useful in remote session management, where sessions need to remain active even when the user is disconnected.
Key Features and Benefits
Screen, developed by GNU, has been a staple among system administrators and power users for decades. It provides the basic functionality needed to manage multiple windows in a single session.
Installing Screen
To install Screen on Ubuntu or Debian:
sudo apt-get install screen
On Red Hat or CentOS:
sudo yum install screen
On Fedora:
sudo dnf install screen
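Once installed, the core detach-and-reattach workflow looks roughly like this; the session name build is only an example:

screen -S build        # start a named session
# press Ctrl-a d to detach; programs keep running in the background
screen -ls             # list existing sessions
screen -r build        # reattach later

tmux new -s build      # the tmux equivalent; detach with Ctrl-b d
tmux ls
tmux attach -t build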
Enhancing System Security and Efficiency through User and Group Management [Linux Journal - The Original Magazine of the Linux Community]
Linux, a powerhouse in the world of operating systems, is renowned for its robustness, security, and scalability. Central to these strengths is the effective management of users and groups, which ensures secure and efficient access to system resources. This guide delves into the intricacies of user and group management, providing a foundation for both newcomers and seasoned administrators to enhance their Linux system administration skills.
In Linux, a user is anyone who interacts with the operating system, be it a human or a software agent. Users can be categorized into three types:
Root User: Also known as the superuser, the root user has unfettered access to the system. This account can modify any file, run privileged commands, and has administrative rights over other user accounts.
System Users: These accounts are created to run specific services such as web servers or database systems. Typically, these users do not have login capabilities and are used to segregate duties for security purposes.
Regular Users: These are the typical accounts created for actual people using the system. They have more limited privileges compared to the root user, which can be adjusted through group memberships or permission changes.
Each user is uniquely identified by a User ID (UID). The UID for the root user is always 0, while UIDs for other users usually start from 1000 upwards by default.
A group in Linux is a collection of users who share certain privileges and access rights. Groups make it easier to manage permissions for a collection of users, rather than having to assign permissions individually.
Groups are identified by a Group ID (GID), similar to how users are identified by UIDs.
Linux offers a suite of command-line tools for managing users and groups:
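As a brief, illustrative sampling of those tools (the user and group names here are examples only):

sudo useradd -m -s /bin/bash alice       # create a user with a home directory and bash as login shell
sudo passwd alice                        # set the user's password
sudo groupadd developers                 # create a new group
sudo usermod -aG developers alice        # add the user to the group as a supplementary member
id alice                                 # show the user's UID, GID and group memberships
sudo userdel -r alice                    # delete the user along with their home directory
sudo groupdel developers                 # delete the group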
What Do Linux Kernel Developers Think of Rust? [Slashdot: Linux]
Keynotes at this year's FOSDEM included free AI models and systemd, reports Heise.de — and also a progress report from Miguel Ojeda, supervisor of the Rust integration in the Linux kernel. Only eight people remain in the core team around Rust for Linux... Miguel Ojeda therefore launched a survey among kernel developers, including those outside the Rust community, and presented some of the more important voices in his FOSDEM talk. The overall mood towards Rust remains favorable, especially as Linus Torvalds and Greg Kroah-Hartman are convinced of the necessity of Rust integration. This is less about rapid progress and more about finding new talent for kernel development in the future. The reaction was mostly positive, judging by Ojeda's slides: - "2025 will be the year of Rust GPU drivers..." — Daniel Almedia - "I think the introduction of Rust in the kernel is one of the most exciting development experiments we've seen in a long time." — Andrea Righi - "[T]he project faces unique challenges. Rust's biggest weakness, as a language, is that relatively few people speak it. Indeed, Rust is not a language for beginners, and systems-level development complicates things even more. That said, the Linux kernel project has historically attracted developers who love challenging software — if there's an open source group willing to put the extra effort for a better OS, it's the kernel devs." — Carlos Bilbao - "I played a little with [Rust] in user space, and I just absolutely hate the cargo concept... I hate having to pull down other code that I do not trust. At least with shared libraries, I can trust a third party to have done the build and all that... [While Rust should continue to grow in the kernel], if a subset of C becomes as safe as Rust, it may make Rust obsolete..." Steven Rostedt Rostedt wasn't sure if Rust would attract more kernel contributors, but did venture this opinion. "I feel Rust is more of a language that younger developers want to learn, and C is their dad's language." But still "contention exists within the kernel development community between those pro-Rust and -C camps," argues The New Stack, citing the latest remarks from kernel maintainer Christoph Hellwig (who had earlier likened the mixing of Rust and C to cancer). Three days later Hellwig reiterated his position again on the Linux kernel mailing list: "Every additional bit that another language creeps in drastically reduces the maintainability of the kernel as an integrated project. The only reason Linux managed to survive so long is by not having internal boundaries, and adding another language completely breaks this. You might not like my answer, but I will do everything I can do to stop this. This is NOT because I hate Rust. While not my favourite language it's definitively one of the best new ones and I encourage people to use it for new projects where it fits. I do not want it anywhere near a huge C code base that I need to maintain." But the article also notes that Google "has been a staunch supporter of adding Rust to the kernel for Linux running in its Android phones." The use of Rust in the kernel is seen as a way to avoid memory vulnerabilities associated with C and C++ code and to add more stability to the Android OS. "Google's wanting to replace C code with Rust represents a small piece of the kernel but it would have a huge impact since we are talking about billions of phones," Ojeda told me after his talk. 
In addition to Google, Rust adoption and enthusiasm for it is increasing as Rust gets more architectural support and as "maintainers become more comfortable with it," Ojeda told me. "Maintainers have already told me that if they could, then they would start writing Rust now," Ojeda said. "If they could drop C, they would do it...." Amid the controversy, there has been a steady stream of vocal support for Ojeda. Much of his discussion also covered statements given by advocates for Rust in the kernel, ranging from lead developers of the kernel and including Linux creator Linus Torvalds himself to technology leads from Red Hat, Samsung, Google, Microsoft and others.
Read more of this story at Slashdot.
Mixing Rust and C in Linux Likened To Cancer By Kernel Maintainer [Slashdot: Linux]
A heated dispute has erupted in the Linux kernel community over the integration of Rust code, with kernel maintainer Christoph Hellwig likening multiple programming languages to "cancer" for the project's maintainability. The conflict centers on a proposed patch enabling Rust-written device drivers to access the kernel's DMA API, which Hellwig strongly opposed. While the dispute isn't about Rust itself, Hellwig argues that maintaining cross-language codebases severely compromises Linux's integrated nature. From a report: "Don't force me to deal with your shiny language of the day," he [Hellwig] wrote. "Maintaining multi-language projects is a pain I have no interest in dealing with. If you want to use something that's not C, be that assembly or Rust, you write to C interfaces and deal with the impedance mismatch yourself as far as I'm concerned." This resistance follows the September departure of Microsoft engineer Wedson Almeida Filho from the Rust for Linux project, citing "nontechnical nonsense."
Read more of this story at Slashdot.
LibreOffice 25.2, the office suite that meets today’s user needs [Press Releases Archives - The Document Foundation Blog]
The new major release provides many user interface and accessibility improvements, plus the usual interoperability features
Berlin, 6 February 2025 – LibreOffice 25.2, the new major release of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple Silicon and Intel) and Linux is available on our download page. LibreOffice is the best office suite for users who want to retain control over their individual software and documents, thereby protecting their privacy and digital life from the commercial interference and the lock-in strategies of Big Tech.
LibreOffice is the only office suite designed to meet the actual needs of the user – not just their eyes. It offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes, optimising the space available to put the maximum number of features just a click or two away.
It is also the only software for creating documents (that may contain personal or confidential information) that respects the user’s privacy, ensuring that the user can decide if and with whom to share the content they create, thanks to the standard and open format that is not used as a lock-in tool, forcing periodic software updates. All this with a feature set that is comparable to the leading software on the market and far superior to that of any competitor.
What makes LibreOffice unique is the LibreOffice Technology Platform, the only one on the market that allows the consistent development of desktop, mobile and cloud versions – including those provided by companies in the ecosystem – capable of producing identical and fully interoperable documents based on the two available ISO standards: the open ODF or Open Document Format (ODT, ODS and ODP) and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a huge number of artificial (and unnecessary) lock-in complexities that create problems for users convinced they are using a standard format.
End users can get first-level technical support from volunteers on the user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org. LibreOffice Writer Guide can be downloaded from https://books.libreoffice.org/en/.
New Features of LibreOffice 25.2
The release notes [1] detail the improvements by area: privacy, core/general, Writer, Calc, Impress & Draw, the user interface, accessibility, and the ScriptForge libraries.
Contributors to LibreOffice 25.2
A total of 176 developers contributed to the new features in LibreOffice 25.2: 47% of the code commits came from 50 developers employed by ecosystem companies – Collabora and allotropia – and other organisations, 31% from seven developers at The Document Foundation, and the remaining 22% from 119 individual volunteer developers.
An additional 189 volunteers have committed 771,263 localized strings in 160 languages, representing hundreds of people working on translations. LibreOffice 25.2 is available in 120 languages, more than any other desktop software, making it available to over 5.5 billion people in their native language. In addition, over 2.4 billion people speak one of these 120 languages as a second language.
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.
Migrations to LibreOffice
The Document Foundation publishes a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the deployment of an LTS (long-term support) enterprise-optimised version of LibreOffice, plus migration consulting and training provided by certified professionals who offer value-added solutions consistent with proprietary offerings. Reference: www.libreoffice.org/get-help/professional-support/.
In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.
Availability of LibreOffice 25.2
LibreOffice 25.2 is available at www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple MacOS 10.15. LibreOffice Technology-based products for Android and iOS are listed here: www.libreoffice.org/download/android-and-ios/.
For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation still maintains the LibreOffice 24.8 family, which includes several months of back-ported fixes. The current release is LibreOffice 24.8.4.
LibreOffice users, free software advocates and community members can support The Document Foundation with a donation at www.libreoffice.org/donate.
[1] Release Notes: wiki.documentfoundation.org/ReleaseNotes/25.2
LibreOffice 24.8.4, optimised for the privacy-conscious user, is available for download [Press Releases Archives - The Document Foundation Blog]
Berlin, 19 December 2024 – LibreOffice 24.8.4, the fourth minor release of the LibreOffice 24.8 family of the free open source, volunteer-supported office suite for Windows (Intel, AMD and ARM), MacOS (Apple and Intel) and Linux, is available at www.libreoffice.org/download.
The release includes over 55 bug and regression fixes over LibreOffice 24.8.3 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats.
LibreOffice is the only office suite that respects the privacy of the user, ensuring that the user is able to decide if and with whom to share the content they create. It even allows deleting user related info from documents. As such, LibreOffice is the best option for the privacy-conscious office suite user, while offering a feature set comparable to the leading product on the market.
Also, LibreOffice offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes by using all the space available on the desktop to put the maximum number of features just a click or two away.
The biggest advantage over competing products is the LibreOffice Technology engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those from ecosystem companies – are based.
This allows LibreOffice to produce identical and fully interoperable documents based on two ISO standards: the open and neutral Open Document Format (ODT, ODS, ODP) and the closed and fully proprietary Microsoft OOXML (DOCX, XLSX, PPTX), which hides a large amount of artificial complexity, and can cause problems for users who are confident that they are using a true open standard.
End users looking for support can download the LibreOffice 24.8 Getting Started, Writer, Impress, Draw and Math guides from the following link: books.libreoffice.org/. In addition, they can get first-level technical support from volunteers on mailing lists and the Ask LibreOffice website: ask.libreoffice.org.
LibreOffice for Enterprise
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with three or five year backporting of security patches, other dedicated value-added features and Service Level Agreements: www.libreoffice.org/download/libreoffice-in-business/.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.
The Document Foundation’s migration protocol helps companies move from proprietary office suites to LibreOffice, by installing the LTS (long-term support) enterprise-optimised version of LibreOffice, plus consulting and training provided by certified professionals: www.libreoffice.org/get-help/professional-support/.
In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.
LibreOffice 24.8.4 availability
LibreOffice 24.8.4 is available from www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 (no longer supported by Microsoft) and Apple MacOS 10.15. Products for Android and iOS are at www.libreoffice.org/download/android-and-ios/.
Users of the LibreOffice 24.2 branch (the last update being 24.2.7), which has recently reached end-of-life, should consider upgrading to LibreOffice 24.8.4, as this is already the most tested version of the program. Early February will see the announcement of LibreOffice 25.2.
LibreOffice users, free software advocates and community members can support The Document Foundation by donating at www.libreoffice.org/donate.
Enterprises deploying LibreOffice can also donate, although the best solution for their needs would be to look for the enterprise-optimized versions of the software (with Long Term Support for security and Service Level Agreements to protect their investment) at www.libreoffice.org/download/libreoffice-in-business/.
[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.8.4/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.8.4/RC2.
Announcement of LibreOffice 24.8.3, the office suite optimised for the privacy-conscious office suite user who wants full control over the information they share [Press Releases Archives - The Document Foundation Blog]
Berlin, 14 November 2024 – LibreOffice 24.8.3, the third minor release of the LibreOffice 24.8 family of the free open source, volunteer-supported office suite for Windows (Intel, AMD and ARM), MacOS (Apple and Intel) and Linux, is available at www.libreoffice.org/download.
The release includes over 80 bug and regression fixes over LibreOffice 24.8.2 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. In addition, support for Visio template format VSTX has been added.
LibreOffice is the only office suite that respects the privacy of the user, ensuring that the user is able to decide if and with whom to share the content they create. It even allows deleting user related info from documents. As such, LibreOffice is the best option for the privacy-conscious office suite user, while offering a feature set comparable to the leading product on the market.
Also, LibreOffice offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes by using all the space available on the desktop to put the maximum number of features just a click or two away.
The biggest advantage over competing products is the LibreOffice Technology engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those from ecosystem companies – are based.
This allows LibreOffice to produce identical and fully interoperable documents based on the two ISO standards: the Open Document Format (ODT, ODS, ODP) and the fully proprietary Microsoft OOXML (DOCX, XLSX, PPTX), which hides a large amount of artificial complexity, and can cause problems for users who are confident that they are using a true open standard.
End users looking for support can download the LibreOffice 24.8 Getting Started, Writer and Impress guides from the following link: books.libreoffice.org/. In addition, they will be able to get first-level technical support from volunteers on mailing lists and the Ask LibreOffice website: ask.libreoffice.org.
LibreOffice for Enterprise
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with three or five year backporting of security patches, other dedicated value-added features and Service Level Agreements: www.libreoffice.org/download/libreoffice-in-business/.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.
The Document Foundation’s migration protocol helps companies move from proprietary office suites to LibreOffice, by installing the LTS (long-term support) enterprise-optimised version of LibreOffice, plus consulting and training provided by certified professionals: www.libreoffice.org/get-help/professional-support/.
In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.
LibreOffice 24.8.3 availability
LibreOffice 24.8.3 is available from www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 (no longer supported by Microsoft) and Apple macOS 10.15. Products for Android and iOS are at www.libreoffice.org/download/android-and-ios/.
LibreOffice users, free software advocates and community members can support The Document Foundation by donating at www.libreoffice.org/donate.
Enterprises deploying LibreOffice can also donate, although the best solution for their needs would be to look for the enterprise-optimized versions of the software (with Long Term Support for security and Service Level Agreements to protect their investment) at www.libreoffice.org/download/libreoffice-in-business/.
[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.8.3/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.8.3/RC2.
'I'm Done With Ubuntu' [Slashdot: Linux]
Software developer and prolific blogger Herman Ounapuu, writing in a blog post: I liked Ubuntu. For a very long time, it was the sensible default option. Around 2016, I used the Ubuntu GNOME flavor, and after they ditched the Unity desktop environment, GNOME became the default option. I was really happy with it, both for work and personal computing needs. Estonian ID card software was also officially supported on Ubuntu, which made Ubuntu a good choice for family members. But then something changed. Ounapuu recounts how Ubuntu's bi-annual long-term support releases consistently broke functionality, from minor interface glitches to catastrophic system failures that left computers unresponsive. His breaking point came after multiple problematic upgrades affecting family members' computers, including one that rendered a laptop completely unusable during an upgrade from Ubuntu 20.04 to 22.04. Another incident left a relative's system with broken Firefox shortcuts and duplicate status bar icons after updating Lubuntu 18.04. Canonical's aggressive push of Snap packages has drawn particular criticism. The forced migration of system components from traditional Debian packages to Snaps resulted in compatibility issues, broken desktop shortcuts, and government ID card authentication failures. In one instance, he writes, a Snap-related bug in the GNOME desktop environment severely disrupted workplace productivity, requiring multiple system restarts to resolve. The author has since switched to Fedora, praising its implementation of Flatpak as a superior alternative to Snaps.
Read more of this story at Slashdot.
Red Hat Plans to Add AI to Fedora and GNOME [Slashdot: Linux]
In his post about the future of Fedora Workstation, Christian F.K. Schaller discusses how the Red Hat team plans to integrate AI with IBM's open-source Granite engine to enhance developer tools, such as IDEs, and create an AI-powered Code Assistant. He says the team is also working on streamlining AI acceleration in Toolbx and ensuring Fedora users have access to tools like RamaLama. From the post: One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBMs Granite effort we know have an AI engine that is available under proper open source licensing terms and which can be extended for many different usecases. Also the IBM Granite team has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that we make sure setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite and that we come up with other cool integration points. "I'm still not sure how I feel about this approach," writes designer/developer and blogger, Bradley Taunt. "While IBM Granite is an open source model, I still don't enjoy so much artificial 'intelligence' creeping into core OS development. This also isn't something optional on the end-users side, like a desktop feature or package. This sounds like it's going to be built directly into the core system." "Red Hat has been pushing hard towards AI and my main concern is having this influence other operating system dev teams. Luckily things seems AI-free in BSD land. For now, at least."
Read more of this story at Slashdot.
Popular Linux Orgs Freedesktop, Alpine Linux Are Scrambling For New Web Hosting [Slashdot: Linux]
An anonymous reader quotes a report from Ars Technica: In what is becoming a sadly regular occurrence, two popular free software projects, X.org/Freedesktop.org and Alpine Linux, need to rally some of their millions of users so that they can continue operating. Both services have largely depended on free server resources provided by Equinix (formerly Packet.net) and its Metal division for the past few years. Equinix announced recently that it was sunsetting its bare-metal sales and services, or renting out physically distinct single computers rather than virtualized and shared hardware. As reported by the Phoronix blog, both free software organizations have until the end of April to find and fund new hosting, with some fairly demanding bandwidth and development needs. An issue ticket on Freedesktop.org's GitLab repository provides the story and the nitty-gritty needs of that project. Both the X.org foundation (home of the 40-year-old window system) and Freedesktop.org (a shared base of specifications and technology for free software desktops, including Wayland and many more) used Equinix's donated space. [...] Alpine Linux, a small, security-minded distribution used in many containers and embedded devices, also needs a new home quickly. As detailed in its blog, Alpine Linux uses about 800TB of bandwidth each month and also needs continuous integration runners (or separate job agents), as well as a development box. Alpine states it is seeking co-location space and bare-metal servers near the Netherlands, though it will consider virtual machines if bare metal is not feasible.
Read more of this story at Slashdot.
Debian Package Dependency Management: Handling Dependencies [Linux Journal - The Original Magazine of the Linux Community]
Debian-based Linux distributions, such as Ubuntu, Linux Mint, and Debian itself, rely on robust package management systems to install, update, and remove software efficiently. One of the most critical aspects of package management is handling dependencies—ensuring that all required libraries and packages are present for an application to function correctly.
Dependency management is crucial for maintaining system stability, avoiding broken packages, and ensuring software compatibility. This article explores how Debian handles package dependencies, how to manage them effectively, and how to troubleshoot common dependency-related issues.
Debian uses the .deb package format, which contains precompiled binaries, configuration files, and metadata describing the package, including its dependencies. The primary tools for handling Debian packages are:
dpkg: A low-level package manager used for installing, removing, and querying .deb packages.
APT (Advanced Package Tool): A high-level package management system that resolves dependencies automatically and fetches required packages from repositories.
Without proper dependency handling, installing a single package could become a nightmare of manually finding and installing supporting files. APT streamlines this process by automating dependency resolution.
Dependencies ensure that an application has all the necessary libraries and components to function correctly. In Debian, dependencies are defined in the package's control file. These dependencies are categorized as follows:
Depends: Mandatory dependencies required for the package to work.
Recommends: Strongly suggested dependencies that enhance functionality but are not mandatory.
Suggests: Optional packages that provide additional features.
Breaks: Indicates that a package is incompatible with certain versions of another package.
Conflicts: Prevents the installation of two incompatible packages.
Provides: Allows one package to act as a substitute for another (useful for virtual packages).
For example, if you attempt to install a software package using APT, it will automatically fetch and install all required dependencies based on the Depends field.
APT simplifies dependency management by automatically resolving and installing required packages. Some essential APT commands include:
Updating package lists: sudo apt update
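A few more of the commonly used commands, shown as an illustrative sketch (curl is only an example package):

sudo apt install curl        # install a package; APT pulls in everything listed in Depends
sudo apt upgrade             # upgrade installed packages, resolving any new dependencies
apt-cache depends curl       # inspect a package's Depends, Recommends and Suggests
sudo apt-get install -f      # attempt to repair broken or missing dependencies
sudo apt autoremove          # remove dependencies that are no longer needed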
Simplifying User Accounts and Permissions Management in Linux [Linux Journal - The Original Magazine of the Linux Community]
Linux, renowned for its robustness and security, is a powerful multi-user operating system that allows multiple people to interact with the same system resources without interfering with each other. Proper management of user accounts and permissions is crucial to maintaining the security and efficiency of a Linux system. This article provides an exploration of how to effectively manage user accounts and permissions in Linux.
User accounts are essential for individual users to access and operate Linux systems. They help in resource allocation, setting privileges, and securing the system from unauthorized access. There are mainly two types of user accounts:
Additionally, Linux systems also include various system accounts that are used to run services such as web servers, databases, and more.
Creating a user account in Linux can be accomplished with the useradd or adduser commands. The adduser command is more interactive and user-friendly than useradd.
sudo adduser newusername
This command creates a new user account and its home directory with default configuration files.
Setting user attributes
Passwords are set with the passwd command.
A custom home directory can be assigned at creation time with useradd -d /home/newusername newusername.
The login shell can be chosen with useradd -s /bin/bash newusername.
Existing accounts are modified with usermod; for example, sudo usermod -s /bin/zsh username changes the user's default shell to zsh.
A user account and its home directory are removed with userdel -r username.
In Linux, every file and directory has associated access permissions which determine who can read, write, or execute them.
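As a quick, hypothetical illustration of those permission bits in practice (the file, user, and group names are invented):

ls -l report.txt                        # show the current owner, group and permission bits
chmod 640 report.txt                    # owner can read/write, group can read, others get nothing
chmod u=rw,g=r,o= report.txt            # the same permissions written symbolically
sudo chown alice:developers report.txt  # change the file's owner and group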
Facebook Flags Linux Topics As 'Cybersecurity Threats' [Slashdot: Linux]
Facebook has banned posts mentioning Linux-related topics, with the popular Linux news and discussion site, DistroWatch, at the center of the controversy. Tom's Hardware reports: A post on the site claims, "Facebook's internal policy makers decided that Linux is malware and labeled groups associated with Linux as being 'cybersecurity threats.' We tried to post some blurb about distrowatch.com on Facebook and can confirm that it was barred with a message citing Community Standards. DistroWatch says that the Facebook ban took effect on January 19. Readers have reported difficulty posting links to the site on this social media platform. Moreover, some have told DistroWatch that their Facebook accounts have been locked or limited after sharing posts mentioning Linux topics. If you're wondering if there might be something specific to DistroWatch.com, something on the site that the owners/operators perhaps don't even know about, for example, then it seems pretty safe to rule out such a possibility. Reports show that "multiple groups associated with Linux and Linux discussions have either been shut down or had many of their posts removed." However, we tested a few other Facebook posts with mentions of Linux, and they didn't get blocked immediately. Copenhagen-hosted DistroWatch says it has tried to appeal against the Community Standards-triggered ban. However, they say that a Facebook representative said that Linux topics would remain on the cybersecurity filter. The DistroWatch writer subsequently got their Facebook account locked... DistroWatch points out the irony at play here: "Facebook runs much of its infrastructure on Linux and often posts job ads looking for Linux developers." UPDATE: Facebook has admited they made a mistake and stopped blocking the posts.
Read more of this story at Slashdot.
Facebook Admits Linux-Post Crackdown Was 'In Error', Fixes Moderation Error [Slashdot: Linux]
Tom's Hardware reports: Facebook's heavy-handed censorship of Linux groups and topics was "in error," the social media juggernaut has admitted. Responding to reports earlier this week, sparked by the curious censorship of the eminently wholesome DistroWatch, Facebook contacted PCMag to say that it had made a mistake and that the underlying issue had been rectified. "This enforcement was in error and has since been addressed. Discussions of Linux are allowed on our services," said a Meta rep to PCMag. That is the full extent of the statement reproduced by the source... Copenhagen-hosted DistroWatch says it has appealed against the Community Standards-triggered ban shortly after it noticed it was in effect (January 19). PCMag received the Facebook admission of error on January 28. The latest statement from DistroWatch, which now prefers posting on Mastodon, indicates that Facebook has lifted the DistroWatch links ban. More details from PCMag: Meta didn't say what caused the crackdown in the first place. But the company has been revamping some of its content moderation and plans to replace its fact-checking methodology with a user-driven Community Notes, similar to X. "We're also going to change how we enforce our policies to reduce the kind of mistakes that account for the vast majority of the censorship on our platforms," the company said earlier this month, in another irony. "Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn't have been," Meta added in the same post.
Read more of this story at Slashdot.
Android 16's Linux Terminal Runs Doom [Slashdot: Linux]
Google is enhancing Android 16's Linux Terminal app to support graphical Linux applications, so Android Authority decided to put it to the test by running Doom. From the report: The Terminal app first appeared in the Android 15 QPR2 beta as a developer option, and it still remains locked behind developer settings. Since its initial public release, Google pushed a few changes that fixed issues with the installation process and added a settings menu to resize the disk, forward ports, and backup the installation. However, the biggest changes the company has been working on, which include adding hardware acceleration support and a full graphical environment, have not been pushed to any public releases. Thankfully, since Google is working on this feature in the open, it's possible to simply compile a build of AOSP with these changes added in. This gives us the opportunity to trial upcoming features of the Android Linux Terminal app before a public release. To demonstrate, we fired up the Linux Terminal on a Pixel 9 Pro, tapped a new button on the top right to enter the Display activity, and then ran the 'weston' command to open up a graphical environment. (Weston is a reference implementation of a Wayland compositor, a modern display server protocol.) We also went ahead and enabled hardware acceleration beforehand as well as installed Chocolate Doom, a source port of Doom, to see if it would run. Doom did run, as you can see below. It ran well, which is no surprise considering Doom can run on literal potatoes. There wasn't any audio because an audio server isn't available yet, but audio support is something that Google is still working on.
Read more of this story at Slashdot.
Exploring LXC Containerization for Ubuntu Servers [Linux Journal - The Original Magazine of the Linux Community]
In the world of modern software development and IT infrastructure, containerization has emerged as a transformative technology. It offers a way to package software into isolated environments, making it easier to deploy, scale, and manage applications. While Docker is the most popular containerization technology, there are other solutions that cater to different use cases and needs. One such solution is LXC (Linux Containers), which offers a more full-fledged approach to containerization, akin to lightweight virtual machines.
In this guide, we will explore how LXC works, how to set it up on Ubuntu Server, and how to leverage it for efficient and scalable containerization. Whether you're looking to run multiple isolated environments on a single server, or you want a lightweight alternative to virtualization, LXC can meet your needs. By the end of this article, you will have the knowledge to deploy, manage, and secure LXC containers on your Ubuntu Server setup.
LXC (Linux Containers) is an operating system-level virtualization technology that allows you to run multiple isolated Linux systems (containers) on a single host. Unlike traditional virtualization, which relies on hypervisors to emulate physical hardware for each virtual machine (VM), LXC containers share the host’s kernel while maintaining process and file system isolation. This makes LXC containers lightweight and efficient, with less overhead compared to VMs.
LXC offers a more traditional way of containerizing entire operating systems, as opposed to application-focused containerization solutions like Docker. While Docker focuses on packaging individual applications and their dependencies into containers, LXC provides a more complete environment that behaves like a full operating system.
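To make that distinction concrete, here is a minimal sketch of the classic LXC workflow on Ubuntu; the container name demo and the image parameters are only examples:

sudo apt install lxc                                                  # install the LXC userspace tools
sudo lxc-create -n demo -t download -- -d ubuntu -r jammy -a amd64    # build a container from the image server
sudo lxc-start -n demo                                                # boot the container
sudo lxc-ls -f                                                        # list containers and their state
sudo lxc-attach -n demo                                               # open a shell inside the container
sudo lxc-stop -n demo                                                 # shut it down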
Efficient Text Processing in Linux: Awk, Cut, Paste [Linux Journal - The Original Magazine of the Linux Community]
In the world of Linux, the command line is an incredibly powerful tool for managing and manipulating data. One of the most common tasks that Linux users face is processing and extracting information from text files. Whether it's log files, configuration files, or even data dumps, text processing tools allow users to handle these files efficiently and effectively.
Three of the most fundamental and versatile text-processing commands in Linux are awk, cut, and paste. These tools enable you to extract, modify, and combine data in a way that's quick and highly customizable. While each of these tools has a distinct role, together they offer a robust toolkit for handling various types of text-based data. In this article, we will explore each of these tools, showcasing their capabilities and providing examples of how they can be used in day-to-day tasks.
The cut Command
The cut command is one of the simplest yet most useful text-processing tools in Linux. It allows users to extract sections from each line of input, based on delimiters or character positions. Whether you're working with tab-delimited data, CSV files, or any structured text data, cut can help you quickly extract specific fields or columns.
The purpose of cut is to enable users to cut out specific parts of a file. It's highly useful for dealing with structured text like CSVs, where each line represents a record and the fields are separated by a delimiter (e.g., a comma or tab).
cut -d [delimiter] -f [fields] [file]
-d [delimiter]: This option specifies the delimiter, which is the character that separates fields in the text. By default, cut treats tabs as the delimiter.
-f [fields]: This option is used to specify which fields you want to extract. Fields are numbered starting from 1.
[file]: The name of the file you want to process.
Suppose you have a CSV file called data.csv with the following content:
Name,Age,Location
Alice,30,New York
Bob,25,San Francisco
Charlie,35,Boston
To extract the "Name" and "Location" columns, you would use:
cut -d ',' -f 1,3 data.csv
This will output:
Name,Location
Alice,New York
Bob,San Francisco
Charlie,Boston
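For comparison, here is a hedged sketch of the other two tools applied to the same data.csv: awk can select the same columns, and paste can stitch columns back together.

awk -F',' '{print $1 "," $3}' data.csv     # awk: print fields 1 and 3, using a comma as the field separator

cut -d ',' -f 1 data.csv > names.txt       # split two columns into separate files
cut -d ',' -f 2 data.csv > ages.txt
paste -d ',' names.txt ages.txt            # paste: rejoin them line by line, separated by a comma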
How to Configure Network Interfaces with Netplan on Ubuntu [Linux Journal - The Original Magazine of the Linux Community]
Netplan is a modern network configuration tool introduced in Ubuntu 17.10 and later adopted as the default for managing network interfaces in Ubuntu 18.04 and beyond. With its YAML-based configuration files, Netplan simplifies the process of managing complex network setups, providing a seamless interface to underlying tools like systemd-networkd and NetworkManager.
In this guide, we’ll walk you through the process of configuring network interfaces using Netplan, from understanding its core concepts to troubleshooting potential issues. By the end, you’ll be equipped to handle basic and advanced network configurations on Ubuntu systems.
Netplan serves as a unified tool for network configuration, allowing administrators to manage networks using declarative YAML files. These configurations are applied by renderers like:
systemd-networkd: Ideal for server environments.
NetworkManager: Commonly used in desktop setups.
The key benefits of Netplan include:
Simplicity: YAML-based syntax reduces complexity.
Consistency: A single configuration file for all interfaces.
Flexibility: Supports both simple and advanced networking scenarios like VLANs and bridges.
Before diving into Netplan, ensure you have the following:
A supported Ubuntu system (18.04 or later).
Administrative privileges (sudo access).
Basic knowledge of network interfaces and YAML syntax.
Netplan configuration files are stored in /etc/netplan/. These files typically end with the .yaml extension and may include filenames like 01-netcfg.yaml or 50-cloud-init.yaml.
Backup existing configurations: Before making changes, create a backup with the command:
sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak
YAML Syntax Rules: YAML is indentation-sensitive. Always use spaces (not tabs) for indentation.
Here’s how you can configure different types of network interfaces using Netplan.
Step 1: Identify Network Interfaces
Before modifying configurations, identify available network interfaces using:
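A sketch of the flow from here; the file name 99-example.yaml and the interface name enp0s3 are placeholders to adapt to your own system:

ip link show                                   # list interfaces and note the relevant name
sudo tee /etc/netplan/99-example.yaml > /dev/null <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: true
EOF
sudo netplan try     # test the change; it rolls back automatically unless confirmed
sudo netplan apply   # apply it permanently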
Navigating Service Management on Debian [Linux Journal - The Original Magazine of the Linux Community]
Managing services effectively is a crucial aspect of maintaining any Linux-based system, and Debian, one of the most popular Linux distributions, is no exception. In modern Linux systems, Systemd has become the dominant init system, replacing traditional options like SysVinit. Its robust feature set, flexibility, and speed make it the preferred choice for system and service management. This article dives into Systemd, exploring its functionality and equipping you with the knowledge to manage services confidently on Debian.
Systemd is an init system and service manager for Linux operating systems. It is responsible for initializing the system during boot, managing system processes, and handling dependencies between services. Systemd’s design emphasizes parallelization, speed, and a unified approach to managing services and logging.
Key Features of Systemd:
Parallelized Service Startup: Systemd starts services in parallel whenever possible, improving boot times.
Unified Logging with journald: Centralized logging for system events and service output.
Consistent Configuration: Standardized unit files make service management straightforward.
Dependency Management: Ensures that services start and stop in the correct order.
At the core of Systemd’s functionality are unit files. These configuration files describe how Systemd should manage various types of resources or tasks. Unit files are categorized into several types, each serving a specific purpose.
Common Types of Unit Files:
Service Units (.service): Define how services should start, stop, and behave.
Target Units (.target): Group multiple units into logical milestones, like multi-user.target or graphical.target.
Socket Units (.socket): Manage network sockets for on-demand service activation.
Timer Units (.timer): Replace cron jobs by scheduling tasks.
Mount Units (.mount): Handle filesystem mount points.
A typical .service unit file includes the following sections:
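As a purely illustrative sketch (the service name and binary path below are invented for the example):

[Unit]
Description=Example background service
After=network.target

[Service]
ExecStart=/usr/local/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target

Saved as /etc/systemd/system/example.service, such a unit would then be managed with the usual commands:

sudo systemctl daemon-reload                 # make systemd re-read unit files
sudo systemctl enable --now example.service  # start it and enable it at boot
systemctl status example.service             # check its state and recent log lines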
Linux 6.14 Brings Some Systems Faster Suspend and Resume [Slashdot: Linux]
Amid the ongoing Linux 6.14 kernel development cycle, Phoronix spotted a pull request for ACPI updates which "will allow for faster suspend and resume cycles on some systems." Wikipedia defines ACPI as "an open standard that operating systems can use to discover and configure computer hardware components" for things like power management and putting unused hardware components to sleep. Phoronix reports: The ACPI change worth highlighting for Linux 6.14 is switching from msleep() to usleep_range() within the acpi_os_sleep() call in the kernel. This reduces spurious sleep time due to timer inaccuracy. Linux ACPI/PM maintainer Rafael Wysocki of Intel who authored this change noted that it could "spectacularly" reduce the duration of system suspend and resume transitions on some systems... Rafael explained in the patch making the sleep change: "The extra delay added by msleep() to the sleep time value passed to it can be significant, roughly between 1.5 ns on systems with HZ = 1000 and as much as 15 ms on systems with HZ = 100, which is hardly acceptable, at least for small sleep time values." One 2022 bug report complained a Dell XPS 13 using Thunderbolt took "a full 8 seconds to suspend and a full 8 seconds to resume even though no physical devices are connected." In November an Intel engineer posted on the kernel mailing list that the fix gave a Dell XPS 13 a 42% improvement in kernel resume time (from 1943ms to 1127ms).
Read more of this story at Slashdot.
Could New Linux Code Cut Data Center Energy Use By 30%? [Slashdot: Linux]
Two computer scientists at the University of Waterloo in Canada believe changing 30 lines of code in Linux "could cut energy use at some data centers by up to 30 percent," according to the site Data Centre Dynamics. It's the code that processes packets of network traffic, and Linux "is the most widely used OS for data center servers," according to the article: The team tested their solution's effectiveness and submitted it to Linux for consideration, and the code was published this month as part of Linux's newest kernel, release version 6.13. "All these big companies — Amazon, Google, Meta — use Linux in some capacity, but they're very picky about how they decide to use it," said Martin Karsten [professor of Computer Science in the Waterloo's Math Faculty]. "If they choose to 'switch on' our method in their data centers, it could save gigawatt hours of energy worldwide. Almost every single service request that happens on the Internet could be positively affected by this." The University of Waterloo is building a green computer server room as part of its new mathematics building, and Karsten believes sustainability research must be a priority for computer scientists. "We all have a part to play in building a greener future," he said. The Linux Foundation, which oversees the development of the Linux OS, is a founder member of the Green Software Foundation, an organization set up to look at ways of developing "green software" — code that reduces energy consumption. Karsten "teamed up with Joe Damato, distinguished engineer at Fastly" to develop the 30 lines of code, according to an announcement from the university. "The Linux kernel code addition developed by Karsten and Damato was based on research published in ACM SIGMETRICS Performance Evaluation Review" (by Karsten and grad student Peter Cai). Their paper "reviews the performance characteristics of network stack processing for communication-heavy server applications," devising an "indirect methodology" to "identify and quantify the direct and indirect costs of asynchronous hardware interrupt requests (IRQ) as a major source of overhead... "Based on these findings, a small modification of a vanilla Linux system is devised that improves the efficiency and performance of traditional kernel-based networking significantly, resulting in up to 45% increased throughput..."
Read more of this story at Slashdot.
Linux 6.14 Adds Support For The Microsoft Copilot Key Found On New Laptops [Slashdot: Linux]
The Linux 6.14 kernel now maps out support for Microsoft's "Copilot" key "so that user-space software can determine the behavior for handling that key's action on the Linux desktop," writes Phoronix's Michael Larabel. From the report: A change made to the atkbd keyboard driver on Linux now maps the F23 key to support the default copilot shortcut action. The patch authored by Lenovo engineer Mark Pearson explains [...]. Now it's up to the Linux desktop environments for determining what to do if the new Copilot key is pressed. The patch was part of the input updates now merged for the Linux 6.14 kernel.
Read more of this story at Slashdot.
Linux 6.13 Released [Slashdot: Linux]
"Nothing horrible or unexpected happened last week," Linux Torvalds posted tonight on the Linux kernel mailing list, "so I've tagged and pushed out the final 6.13 release." Phoronix says the release has "plenty of fine features": Linux 6.13 comes with the introduction of the AMD 3D V-Cache Optimizer driver for benefiting multi-CCD Ryzen X3D processors. The new AMD EPYC 9005 "Turin" server processors will now default to AMD P-State rather than ACPI CPUFreq for better power efficiency.... Linux 6.13 also brings more Rust programming language infrastructure and more. Phoronix notes that Linux 6.13 also brings "the start of Intel Xe3 graphics bring-up, support for many older (pre-M1) Apple devices like numerous iPads and iPhones, NVMe 2.1 specification support, and AutoFDO and Propeller optimization support when compiling the Linux kernel with the LLVM Clang compiler." And some lucky Linux kernel developers will also be getting a guitar pedal soldered by Linus Torvalds himself, thanks to a generous offer he announced a week ago: For _me_ a traditional holiday activity tends to be a LEGO build or two, since that's often part of the presents... But in addition to the LEGO builds, this year I also ended up doing a number of guitar pedal kit builds ("LEGO for grown-ups with a soldering iron"). Not because I play guitar, but because I enjoy the tinkering, and the guitar pedals actually do something and are the right kind of "not very complex, but not some 5-minute 555 LED blinking thing"... [S]ince I don't actually have any _use_ for the resulting pedals (I've already foisted off a few only unsuspecting victims^Hfriends), I decided that I'm going to see if some hapless kernel developer would want one.... as an admittedly pretty weak excuse to keep buying and building kits... "It may be worth noting that while I've had good success so far, I'm a software person with a soldering iron. You have been warned... [Y]ou should set your expectations along the lines of 'quality kit built by a SW person who doesn't know one end of a guitar from the other.'"
Read more of this story at Slashdot.
LibreOffice 24.2.7 is now available – the last release in the 24.2 branch [Press Releases Archives - The Document Foundation Blog]
Berlin, 31 October 2024 – LibreOffice 24.2.7, the seventh and final planned minor update to the LibreOffice 24.2 branch, is available on our download page for Windows, macOS and Linux.
The release includes over 50 bug and regression fixes over LibreOffice 24.2.6 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.7 is aimed at mainstream users and enterprise production environments.
LibreOffice is the only office suite with a feature set comparable to the market leader, and offers a range of user interface options to suit all users, from traditional to modern Microsoft Office-style. The UI has been developed to make the most of different screen form factors by optimizing the space available on the desktop to put the maximum number of features just a click or two away.
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: LibreOffice in Business.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.
Availability of LibreOffice 24.2.7
LibreOffice 24.2.7 is available from our download page. Minimum requirements for proprietary operating systems are Windows 7 SP1 and macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: www.libreoffice.org/download/android-and-ios/.
This is planned to be the last minor update to the LibreOffice 24.2 branch, which reaches end-of-life in November. All users are then recommended to upgrade to the LibreOffice 24.8 stable branch.
LibreOffice users, free software advocates and community members can support The Document Foundation by making a donation on our donate page.
[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.2.7/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.2.7/RC2.
The Document Foundation announces the LibreOffice and Open Source Conference 2024 [Press Releases Archives - The Document Foundation Blog]
Berlin, 25 September 2024 – The LibreOffice and Open Source Conference 2024 will take place in Luxembourg from 10 to 12 October 2024. It will be hosted by the Digital Learning Hub and the local campus of 42 Luxembourg at the Terres Rouges buildings in Belval, Esch-sur-Alzette.
This is a key event that brings together the LibreOffice community – supporting the leading FOSS office suite – with a large number of stakeholders: large open source projects, international organizations and representatives from EU institutions and European government departments.
Organized in partnership with the Luxembourg Media & Digital Design Centre (LMDDC), which will host the EdTech track, the event is sponsored by allotropia and Collabora, the two companies contributing most actively to the development of LibreOffice; Passbolt, the Luxembourg-made open source password manager for teams; and the Interdisciplinary Centre for Security, Reliability and Trust (SnT) of the University of Luxembourg.
In addition, local partners such as Luxembourg Convention Bureau, LIST, LU-CIX and Luxembourg House of Cybersecurity are supporting the organization of various aspects of the conference.
After the opening session in the morning of 10 October, which includes institutional presentations from the Minister for Digitalisation, the Ministry of the Economy and the European Commission’s OSPO, there will be tracks about LibreOffice covering development, quality, security, documentation, localization, marketing and enterprise deployments, and tracks about open source covering technologies in education, OSS applications and cybersecurity. Another session will focus on OSPOs (Open Source Programme Offices).
The LibreOffice and Open Source Conference Luxembourg 2024 provides a platform to discuss the latest technical developments, community contributions, and the challenges facing open source software and communities of which TDF, LibreOffice and its community are important components. Professionals, developers, volunteers and users from various fields will share their experiences and collaborate on the future direction of the leading office suite.
Policy and decision makers will find counterparts from all over Europe with which they will be able to exchange ideas and experiences that will help them to promote and implement open source software in public, education and private sector organizations.
On 11 and 12 October, there will also be workshops focusing on different aspects of LibreOffice development, targeted at undergraduate Computer Science students or anyone who knows programming and wants to become familiar with a large-scale, real-world open source software project. To better support the participants, the number of seats is limited to 20, so register for the workshops as soon as possible to reserve your place.
Everyone is encouraged to register and participate in the conference to engage with the open source community, learn from different experts and contribute to meaningful discussions. Please note that, to avoid waste, we will plan for food, drinks and other free items for registered attendees so help us to cater for your needs by registering in time.
LibreOffice 24.2.6 available for download, for the privacy-conscious user [Press Releases Archives - The Document Foundation Blog]
Berlin, 5 September 2024 – LibreOffice 24.2.6, the sixth minor release of the free, volunteer-supported office productivity suite for office environments and individuals, the best choice for privacy-conscious users and digital sovereignty, is available at https://www.libreoffice.org/download for Windows, macOS and Linux.
The release includes over 40 bug and regression fixes over LibreOffice 24.2.5 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.6 is aimed at mainstream users and enterprise production environments.
LibreOffice is the only office suite with a feature set comparable to the market leader, and offers a range of user interface options to suit all users, from traditional to modern Microsoft Office-style. The UI has been developed to make the most of different screen form factors by optimizing the space available on the desktop to put the maximum number of features just a click or two away.
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.
Availability of LibreOffice 24.2.6
LibreOffice 24.2.6 is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Windows 7 SP1 and macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/.
Next week, power users and technology enthusiasts will be able to download LibreOffice 24.8.1, the first minor release of the recently announced new version with many bug and regression fixes. A summary of the new features of the LibreOffice 24.8 family is available in this blog post: https://blog.documentfoundation.org/blog/2024/08/22/libreoffice-248/.
End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which is available for download from the following link: https://books.libreoffice.org/. In addition, they will be able to get first-level technical support from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org.
LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.
[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.6/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.6/RC2.
LibreOffice 24.8, for the privacy-conscious office suite user [Press Releases Archives - The Document Foundation Blog]
The new major release provides a wealth of new features, plus a large number of interoperability improvements
Berlin, 22 August 2024 – LibreOffice 24.8, the new major release of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple and Intel) and Linux is available from our download page. This is the second major release to use the new calendar-based numbering scheme (YY.M), and the first to provide an official package for Windows PCs based on ARM processors.
LibreOffice is the only office suite, or if you prefer, the only software for creating documents that may contain personal or confidential information, that respects the privacy of the user – thus ensuring that the user is able to decide if and with whom to share the content they have created. As such, LibreOffice is the best option for the privacy-conscious office suite user, and provides a feature set comparable to the leading product on the market. It also offers a range of interface options to suit different user habits, from traditional to contemporary, and makes the most of different screen sizes by optimising the space available on the desktop to put the maximum number of features just a click or two away.
The biggest advantage over competing products is the LibreOffice Technology engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those provided by ecosystem companies – are based. This allows LibreOffice to offer a better user experience and to produce identical and perfectly interoperable documents based on the two available ISO standards: the Open Document Format (ODT, ODS and ODP), and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a large amount of artificial complexity, which may create problems for users who are confident that they are using a true open standard.
End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which is available for download from the Bookshelf. In addition, they will be able to get first-level technical support from volunteers on user mailing lists and the Ask LibreOffice website.
New Features of LibreOffice 24.8
PRIVACY
WRITER
CALC
IMPRESS & DRAW
CHART
ACCESSIBILITY
SECURITY
INTEROPERABILITY
A video showcasing the most significant new features is available on YouTube and PeerTube.
Contributors to LibreOffice 24.8
There are 171 contributors to the new features of LibreOffice 24.8: 57% of code commits come from the 49 developers employed by companies on TDF’s Advisory Board – Collabora, allotropia and Red Hat – and other organisations, another 20% from seven developers at The Document Foundation, and the remaining 23% from 115 individual volunteer developers.
An additional 188 volunteers have committed localized strings in 160 languages, representing hundreds of people actually providing translations. LibreOffice 24.8 is available in 120 languages, more than any other desktop software, making it available to over 5.5 billion people in their native language. In addition, over 2.4 billion people speak one of these 120 languages as a second language (L2).
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: LibreOffice in Business.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.
Migrations to LibreOffice
The Document Foundation has developed a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the deployment of an LTS (long-term support) enterprise-optimised version of LibreOffice plus migration consulting and training provided by certified professionals who offer value-added solutions consistent with proprietary offerings. Reference: professional support page.
In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.
Availability of LibreOffice 24.8
LibreOffice 24.8 is available on our download page. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 [1] and Apple MacOS 10.15. LibreOffice Technology-based products for Android and iOS are listed on this page.
For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 24.2 family, which includes several months of back-ported fixes. The current release is LibreOffice 24.2.5.
LibreOffice users, free software advocates and community members can support The Document Foundation with a donation on our donate page.
[1] This does not mean that The Document Foundation suggests the use of this operating system, which is no longer supported by Microsoft itself, and as such should not be used for security reasons.
Release Notes: wiki.documentfoundation.org/ReleaseNotes/24.8
Press Kit with Images: nextcloud.documentfoundation.org/s/JEe8MkDZWMmAGmS
Announcement of LibreOffice 24.2.5 Community, optimized for the privacy-conscious user [Press Releases Archives - The Document Foundation Blog]
Berlin, 11 July 2024 – LibreOffice 24.2.5 Community, the fifth minor release of the free, volunteer-supported office productivity suite for office environments and individuals, the best choice for privacy-conscious users and digital sovereignty, is available at www.libreoffice.org/download for Windows, macOS and Linux.
The release includes more than 70 bug and regression fixes over LibreOffice 24.2.4 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.5 Community is the most advanced version of the office suite and is aimed at power users but can be used safely in other environments.
LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all users, from traditional to modern Microsoft Office-style, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform. All products based on that platform share the same approach, optimised for the privacy-conscious user.
Availability of LibreOffice 24.2.5 Community
LibreOffice 24.2.5 Community is available at www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: www.libreoffice.org/download/android-and-ios/
For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains a version with some months of back-ported fixes. The current release of that branch has reached end of life, so its users should update to LibreOffice 24.2.5 when the new major release LibreOffice 24.8 becomes available in August.
The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: ask.libreoffice.org
LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at www.libreoffice.org/donate
[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.2.5/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.2.5/RC2.
LibreOffice 24.2.4 Community available for download [Press Releases Archives - The Document Foundation Blog]
Berlin, 6 June 2024 – LibreOffice 24.2.4 Community, the fourth minor release of the free, volunteer-supported office suite for personal productivity in office environments, is now available at https://www.libreoffice.org/download for Windows, MacOS and Linux.
The release includes over 70 bug and regression fixes over LibreOffice 24.2.3 [1] to improve the stability and robustness of the software. LibreOffice 24.2.4 Community is the most advanced version of the office suite, offering the best features and interoperability with Microsoft Office proprietary formats.
LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all user habits, from traditional to modern, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.
Availability of LibreOffice 24.2.4 Community
LibreOffice 24.2.4 Community is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple MacOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/
For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 7.6 family, which includes several months of back-ported fixes. The current release is LibreOffice 7.6.7 Community, and it will be replaced by LibreOffice 24.2.4 when the new major release LibreOffice 24.8 becomes available.
The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org
LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.
[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.4/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.4/RC2.
LVM Logical Volumes [linux blogs franz ulenaers]
A partition of type "Linux LVM" can be used for logical volumes, but also as a "snapshot"!
A snapshot can be an exact copy of a logical volume, frozen at a particular moment: this makes it possible to make consistent backups of logical volumes while they are in use!
How to install?
sudo apt-get install lvm2
Create a physical volume for a partition
command = pvcreate partition
example:
the partition must be of type "Linux LVM"!
pvcreate /dev/sda5
Create a volume group
vgcreate vg_storage partition
example:
vgcreate mijnvg /dev/sda5
Add a logical volume to a volume group
lvcreate -L size_in_M/G -n logical_volume_name volume_group
example:
lvcreate -L 30G -n mijnhome mijnvg
Activate a volume group
vgchange -a y volume_group_name
example:
vgchange -a y mijnvg
My physical and logical volumes
physical volume
pvcreate /dev/sda1
physical volume group
vgcreate mydell /dev/sda1
logical volumes
lvcreate -L 1G -n boot mydell
lvcreate -L 100G -n data mydell
lvcreate -L 50G -n home mydell
lvcreate -L 50G -n root mydell
lvcreate -L 1G -n swap mydell
Enlarging/shrinking a logical volume
enlarge my home logical volume by 1 G
lvextend -L +1G /dev/mapper/mydell-home
note: shrinking a logical volume can lead to data loss if there is not enough space left!
lvreduce -L -1G /dev/mapper/mydell-home
Show physical volumes
sudo pvs
shown are: PV physical volume, VG volume group, Fmt format (normally lvm2), Attr attributes, PSize size of the PV, PFree free space
PV VG Fmt Attr PSize PFree
/dev/sda6 mydell lvm2 a-- 920,68g 500,63g
sudo pvs -a
sudo pvs /dev/sda6
Backing up the logical volume configuration
see the included script LVM_bkup
Show volume groups
sudo vgs
VG #PV #LV #SN Attr VSize VFree
mydell 1 6 0 wz--n- 920,68g 500,63g
Show logical volume(s)
sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
boot mydell -wi-ao---- 952,00m
data mydell -wi-ao---- 100,00g
home mydell -wi-ao---- 93,13g
mintroot mydell -wi-a----- 101,00g
root mydell -wi-ao---- 94,06g
swap mydell -wi-ao---- 30,93g
How to remove a logical volume?
A logical volume can only be removed when the volume group is no longer active
this can be done with the vgchange command
vgchange -a n mydell
lvremove /dev/my_volume_group/logical_volume_name
example:
lvremove /dev/mydell/data
How to remove a physical volume?
vgreduce mydell /dev/sda1
Attachments: LVM_bkup (0.8 KB)
How to mount and unmount a USB stick without being root and with your own rwx permissions! [linux blogs franz ulenaers]
How to mount and unmount a USB stick without being root and with rwx permissions?
---------------------------------------------------------------------------------------------------------
(rename every ulefr01 to your own username!)
use the 'fatlabel' command to assign a volume name or label if you use a vfat filesystem on your USB stick
use the 'tune2fs' command for an ext2/3/4 filesystem
to create the volume name stick32GB on your USB stick, use the command:
sudo tune2fs -L stick32GB /dev/sdc1
note: use the correct device here instead of /dev/sdc1!
after mounting you may see dmesg messages: Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
use the filesystem consistency check command fsck to fix this
do a umount before you run the fsck command! (use the correct device!)
fsck /dev/sdc1
note: use your own device here instead of /dev/sdc1!
Insert your stick into a USB port and unmount it
sudo chown ulefr01:ulefr01 /media/ulefr01/ -R
set an ACL on your ext2/3/4 stick (does not work on vfat!)
setfacl -m u:ulefr01:rwx /media/ulefr01
with getfacl you can view the ACL
getfacl /media/ulefr01
with the ls command you can see the result
ls /media/ulefr01 -dla
drwxrwx--- 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01
note: if the '+' is present then an ACL is already in place, as on the following line:
drwxrwx---+ 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01
Insert your stick into a USB port and check whether it gets mounted automatically
check the permissions of existing files and directories on your stick
ls * -la
if root or other ownership is already present, reset it with the following command
sudo chown ulefr01:ulefr01 /media/ulefr01/stick32GB -R
cd /media/ulefr01
mkdir mmcblk16G stick32GB stick16gb
add a line for each stick
examples
LABEL=mmcblk16G /media/ulefr01/mmcblk16G ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick32GB /media/ulefr01/stick32GB ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick16gb /media/ulefr01/stick16gb vfat user,defaults,noauto 0 0
the following should now be possible:
mount and umount without being root
note: you cannot umount if the mount was done by root! If that is the case, first umount as root; then mount as a normal user and you will be able to umount as well.
put a new file on your stick without being root
create a new directory on your stick without being root
check that you can create new files without being root
touch test
ls test -la
rm test
Setting an ACL [linux blogs franz ulenaers]
note: usually possible on Linux filesystems: btrfs, ext2, ext3, ext4 and ReiserFS!
How to set an ACL for one user?
setfacl -m u:ulefr01:rwx /home/ulefr01
note: use your own username here instead of ulefr01
How to remove an ACL?
setfacl -x u:ulefr01 /home/ulefr01
How to set an ACL for two or more users?
setfacl -m u:ulefr01:rwx /home/ulefr01
setfacl -m u:myriam:r-x /home/ulefr01
note: use your second username instead of myriam; here myriam has no w (write) access, but does have r (read) and x (exec)!
How to list the configured ACLs?
getfacl home/ulefr01
getfacl: Removing leading '/' from absolute path names
# file: home/ulefr01
# owner: ulefr01
# group: ulefr01
user::rwx
user:ulefr01:rwx
user:myriam:r-x
group::---
mask::rwx
other::---
How to check the result?
getfacl home/ulefr01
see above
ls /home/ulefr01 -dla
drwxrwx---+ ulefr01 ulefr01 4096 okt 1 18:40 /home/ulefr01
note the + sign!
The best (most performant) filesystem on a USB stick, and how to set it up? [linux blogs franz ulenaers]
the best (most performant) filesystem is ext4
how to set it up?
mkfs.ext4 $device
first disable the journal
tune2fs -O ^has_journal $device
use journaling only with data_writeback
tune2fs -o journal_data_writeback $device
do not use reserved space, set it to zero.
tune2fs -m 0 $device
for the above 3 actions the included bash script can be used:
file USBperf
# USBperfext4
echo 'USBperf'
echo '--------'
echo 'ext4 device ?'
read device
echo "device= $device"
echo 'ok ?'
read ok
if [ "$ok" == '' ] || [ "$ok" == 'n' ] || [ "$ok" == 'N' ]
then
echo 'nok - stopping'
exit 1
fi
echo "doe : no journaling ! tune2fs -O ^has_journal $device"
tune2fs -O ^has_journal $device
echo "use data mode for filesystem as writeback doe : tune2fs -o journal_data $device"
tune2fs -o journal_data_writeback $device
echo "disable reserved space "
tune2fs -m 0 $device
echo 'done !'
read ok
echo "device= $device"
exit 0
adjust the /etc/fstab file for your USB stick
use the 'noatime' option
Encryption [linux blogs franz ulenaers]
With encryption you can secure the data on your computer by making it unreadable to the outside world!
How can you encrypt a filesystem?
install the following open source packages:
loop-aes-utils and cryptsetup
apt-get install loop-aes-utils
apt-get install cryptsetup
How to create an encrypted filesystem?
You can make your filesystem available automatically with an entry like the following in your /etc/fstab:
/home/cryptfile /mnt/crypt ext3 auto,encryption=aes,user,exec 0 0
....
You can turn your encryption off by means of ...
App Launchers for Ubuntu 19.04 [Tech Drive-in]
During the transition period, when GNOME Shell and Unity were pretty rough around the edges and slow to respond, third-party app launchers were a big deal. Over time the newer desktop environments improved and became fast, reliable and predictable, reducing the need for alternate app launchers.
As a result, many third-party app launchers have either slowed down development or simply ceased to exist. Ulauncher seems to be the only one to have bucked the trend so far. Synapse and Kupfer, on the other hand, though old and not as actively developed anymore, still pack a punch. Since Kupfer is too old school, we'll only be discussing Synapse and Ulauncher here.
sudo dpkg -i ~/Downloads/ulauncher_4.3.2.r8_all.deb
sudo apt-get install -f
A Standalone Video Player for Netflix, YouTube, Twitch on Ubuntu 19.04 [Tech Drive-in]
Snap apps are a godsend. ElectronPlayer is an Electron based app available on Snapstore that doubles up as a standalone media player for video streaming services such as Netflix, YouTube, Twitch, Floatplane etc.
And it works great on Ubuntu 19.04 "disco dingo". From what we've tested, Netflix works like a charm, and so does YouTube. ElectronPlayer also has a picture-in-picture mode that lets it run above desktop and full-screen applications.
sudo snap install electronplayer
Howto Upgrade to Ubuntu 19.04 from Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]
As most of you should know already, Ubuntu 19.04 "disco dingo" has been released. A lot of things have changed, see our comprehensive list of improvements in Ubuntu 19.04. Though it is not really necessary to make the jump, I'm sure many here would prefer to have the latest and greatest from Ubuntu. Here's how you upgrade to Ubuntu 19.04 from Ubuntu 18.10 and Ubuntu 18.04.
Upgrading to Ubuntu 19.04 from Ubuntu 18.04 LTS is tricky. There is no way you can make the jump from Ubuntu 18.04 LTS directly to Ubuntu 19.04. For that, you need to upgrade to Ubuntu 18.10 first. Pretty disappointing, I know. But when upgrading an entire OS, you can't be too careful.
And the process itself is not as tedious or time-consuming as it is on Windows. Also unlike Windows, the upgrades are not forced upon you while you're in the middle of something.
sudo do-release-upgrade -d
15 Things I Did Post Ubuntu 19.04 Installation [Tech Drive-in]
Ubuntu 19.04, codenamed "Disco Dingo", has been released (and upgrading is easier than you think). I've been on Ubuntu 19.04 since its first Alpha, and this has been a rock solid release as far as I'm concerned. Changes in Ubuntu 19.04 are more evolutionary though, but the availability of the latest Linux kernel version 5.0 is significant.
sudo apt update && sudo apt dist-upgrade
sudo apt install gnome-tweaks
sudo apt install ubuntu-restricted-extras
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
sudo apt install chrome-gnome-shell
sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers
Ubuntu 19.04 Gets Newer and Better Wallpapers [Tech Drive-in]
A "Disco Dingo" themed wallpaper was already there. But the latest update bring a bunch of new wallpapers as system defaults on Ubuntu 19.04.
LinuxBoot: A Linux Foundation Project to replace UEFI Components [Tech Drive-in]
UEFI has a pretty bad reputation among many in the Linux community. UEFI unnecessarily complicated Linux installation and distro-hopping on machines with Windows pre-installed, for example. The LinuxBoot project by the Linux Foundation aims to replace some firmware functionality, like the UEFI DXE phase, with Linux components.
What is UEFI?
UEFI is a standard or a specification that replaced legacy BIOS firmware, which was the industry standard for decades. Essentially, UEFI defines the software components between operating system and platform firmware.
UEFI boot has three phases: SEC, PEI and DXE. In the Driver eXecution Environment (DXE) phase, the UEFI system loads drivers for configured devices. LinuxBoot replaces specific firmware functionality like the UEFI DXE phase with a Linux kernel and runtime.
LinuxBoot and the Future of System Startup
"Firmware has always had a simple purpose: to boot the OS. Achieving that has become much more difficult due to increasing complexity of both hardware and deployment. Firmware often must set up many components in the system, interface with more varieties of boot media, including high-speed storage and networking interfaces, and support advanced protocols and security features." writes Linux Foundation.
Look up Uber Time, Price Estimates on Terminal with Uber CLI [Tech Drive-in]
The worldwide phenomenon that is Uber needs no introduction. Uber is an immensely popular ride-sharing and ride-hailing company valued in the billions. Uber is so disruptive and controversial that many cities and even countries are putting up barriers to protect the interests of local taxi drivers.
Enough about Uber as a company. To those among you who regularly use the Uber app for booking a cab, Uber CLI could be a useful companion.
sudo apt update
sudo apt install nodejs npm
npm install uber-cli -g
uber time 'pickup address here'
Easy, right? I did some testing with places and addresses I'm familiar with, where Uber cabs are fairly common, and I found the results to be fairly accurate. Do test and leave feedback. See the Uber CLI GitHub page for more info.
uber price -s 'start address' -e 'end address'
UBports Installer for Ubuntu Touch is just too good! [Tech Drive-in]
Even as someone who bought into the Ubuntu Touch hype very early, I was not expecting much from UBports, to be honest. But to my pleasant surprise, the UBports Installer turned my 4-year-old BQ Aquaris E4.5 Ubuntu Edition hardware into a slick, clean, and usable phone again.
Retro Terminal that Emulates Old CRT Display (Ubuntu 18.10, 18.04 PPA) [Tech Drive-in]
We've featured cool-retro-term before. It is a wonderful little terminal emulator app on Ubuntu (and Linux) that adorns this cool retro look of the old CRT displays.
Let the pictures speak for themselves.
sudo add-apt-repository ppa:vantuz/cool-retro-term
sudo apt update
sudo apt install cool-retro-term
Google's Stadia Cloud Gaming Service, Powered by Linux [Tech Drive-in]
Unless you live under a rock, you must've been inundated with nonstop news about Google's high-octane launch ceremony yesterday where they unveiled the much hyped game streaming platform called Stadia.
Stadia, or Project Stream as it was earlier called, is a cloud gaming service where the games themselves are hosted on Google's servers, while the visual feedback from the game is streamed to the player's device through Google Chrome. If this technology catches on, and if it works just as well as shown in the demos, Stadia could be what the future of gaming looks like.
Ubuntu 19.04 Updates - 7 Things to Know [Tech Drive-in]
Ubuntu 19.04 has been released. I've been using it for the past week or so, and even as a pre-beta, the OS was pretty stable and not buggy at all. Here are a bunch of things you should know about Ubuntu 19.04.
Purism: A Linux OS is talking Convergence again [Tech Drive-in]
The hype around "convergence" just won't die it seems. We have heard it from Ubuntu a lot, KDE, even from Google and Apple in fact. But the dream of true convergence, a uniform OS experience across platforms, never really materialised. Even behemoths like Apple and Googled failed to pull it off with their Android/iOS duopoly. Purism's Debian based PureOS wants to change all that for good.
"Purism is beating the duopoly to that dream, with PureOS: we are now announcing that Purism’s PureOS is convergent, and has laid the foundation for all future applications to run on both the Librem 5 phone and Librem laptops, from the same PureOS release", announced Jeremiah Foster, the PureOS director at Purism (by duopoly, he was referring to Android/iOS platforms that dominate smartphone OS ecosystem).
"it turns out that this is really hard to do unless you have complete control of software source code and access to hardware itself. Even then, there is a catch; you need to compile software for both the phone’s CPU and the laptop CPU which are usually different architectures. This is a complex process that often reveals assumptions made in software development but it shows that to build a truly convergent device you need to design for convergence from the beginning."
Komorebi Wallpapers display Live Time & Date, Stunning Parallax Effect on Ubuntu [Tech Drive-in]
Live wallpapers are not a new thing. In fact, we had a lot of live wallpapers to choose from on Linux 10 years ago. Today? Not so much. In fact, be it GNOME or KDE, most desktops today are far less customizable than they used to be. The Komorebi wallpaper manager for Ubuntu is kind of a way-back machine in that sense.
sudo apt remove komorebi
Snap Install Mario Platformer on Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]
Nintendo's Mario needs no introduction. This game defined our childhoods. Now you can install and have fun with an unofficial version of the famed Mario platformer in Ubuntu 18.10 via this Snap package.
sudo snap install mari0
sudo snap connect mari0:joystick
Florida based Startup Builds Ubuntu Powered Aerial Robotics [Tech Drive-in]
Apellix is a Florida-based startup that specialises in aerial robotics. It intends to create safer work environments by replacing workers with its task-specific drones to complete high-risk jobs at dangerous or elevated work sites.
Openpilot: An Opensource Alternative to Tesla Autopilot, GM Super Cruise [Tech Drive-in]
Openpilot is an opensource driving agent which at the moment can perform industry-standard functions such as Adaptive Cruise Control and Lane Keeping Assist System for a select few auto manufacturers.
Oranchelo - The icon theme to beat on Ubuntu 18.10 [Tech Drive-in]
OK, that might be an overstatement. But Oranchelo is good, really good.
sudo add-apt-repository ppa:oranchelo/oranchelo-icon-theme
sudo apt update
sudo apt install oranchelo-icon-theme
11 Things I did After Installing Ubuntu 18.10 Cosmic Cuttlefish [Tech Drive-in]
Have been using "Cosmic Cuttlefish" since its first beta. It is perhaps one of the most visually pleasing Ubuntu releases ever. But more on that later. Now let's discuss what can be done to improve the overall user-experience by diving deep into the nitty gritties of Canonical's brand new flagship OS.
sudo apt install ubuntu-restricted-extras
sudo apt install gnome-tweaks
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
sudo add-apt-repository ppa:slgobinath/safeeyes
sudo apt update
sudo apt install safeeyes
sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers
sudo gedit /etc/default/apport
RIOT OS: A tiny Opensource OS for the 'Internet of Things' (IoT) [Tech Drive-in]
"RIOT powers the Internet of Things like Linux powers the Internet." RIOT is a small, free and opensource operating system for the memory constrained, low power wireless IoT devices.
IBM, the 6th biggest contributor to Linux Kernel, acquires RedHat for $34 Billion [Tech Drive-in]
The $34 billion all cash deal to purchase opensource pioneer Red Hat is IBM's biggest ever acquisition by far. The deal will give IBM a major foothold in fast-growing cloud computing market and the combined entity could give stiff competition to Amazon's cloud computing platform, AWS. But what about Red Hat and its future?
"Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise,” said Jim Whitehurst, President and CEO, Red Hat. “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation."Predicting the future can be tricky. A lot of things can go wrong. But one thing is sure, the acquisition of Red Hat by IBM is nothing like the Oracle - Sun deal. Between them, IBM and Red Hat must have contributed more to the open source community than any other organization.
How to Upgrade from Ubuntu 18.04 LTS to 18.10 'Cosmic Cuttlefish' [Tech Drive-in]
One day left before the final release of Ubuntu 18.10 codenamed "Cosmic Cuttlefish". This is how you make the upgrade from Ubuntu 18.04 to 18.10.
$ sudo apt update && sudo apt dist-upgrade $ sudo apt autoremove
$ sudo gedit /etc/update-manager/release-upgrades
$ sudo do-release-upgrade -d
Meet 'Project Fusion': An Attempt to Integrate Tor into Firefox [Tech Drive-in]
A real private mode in Firefox? A Tor integrated Firefox could just be that. Tor Project is currently working with Mozilla to integrate Tor into Firefox.
"Our ultimate goal is a long way away because of the amount of work to do and the necessity to match the safety of Tor Browser in Firefox when providing a Tor mode. There's no guarantee this will happen, but I hope it will and we will keep working towards it."As If you want to help, Firefox bugs tagged 'fingerprinting' in the whiteboard are a good place to start. Further reading at TOR 'Project Fusion' page.
City of Bern Awards Switzerland's Largest Open Source Contract for its Schools [Tech Drive-in]
In another major win in a span of weeks for the proponents of open source solutions in the EU, Bern, the capital of Switzerland, is pushing ahead with its plans to adopt open source tools as its software of choice for all its public schools. If all goes well, some 10,000 students in Swiss schools could soon start getting their training using an IT infrastructure that is largely open source.
Germany says No to Public Cloud, Chooses Nextcloud's Open Source Solution [Tech Drive-in]
Germany's Federal Information Technology Centre (ITZBund) opts for an on-premise cloud solution which unlike those fancy Public cloud solutions, is completely private and under its direct control.
"Nextcloud is pleased to announce that the German Federal Information Technology Center (ITZBund) has chosen Nextcloud as their solution for efficient and secure file sharing and collaboration in a public tender. Nextcloud is operated by the ITZBund, the central IT service provider of the federal government, and made available to around 300,000 users. ITZBund uses a Nextcloud Enterprise Subscription to gain access to operational, scaling and security expertise of Nextcloud GmbH as well as long-term support of the software."ITZBund employs about 2,700 people that include IT specialists, engineers and network and security professionals. After the successful completion of the pilot, a public tender was floated by ITZBund which eventually selected Nextcloud as their preferred partner. Nextcloud scored high on security requirements and scalability, which it addressed through its unique Apps concept.
LG Makes its webOS Operating System Open Source, Again! [Tech Drive-in]
Not many might remember HP's capable webOS. The open source webOS operating system was HP's answer to the Android and iOS platforms. It was slick and very user-friendly from the start; some even considered it a better alternative to Android for tablets at the time. But like many other smaller players, HP's webOS just couldn't find enough takers, and the project was abruptly ended and sold off to LG.
State of New York does its Christmas shopping at ASML [Computable]
The US state of New York is going to purchase a billion dollars' worth of chip machines from ASML. The investment is part of a ten-billion-dollar plan to build a nanotech complex near the University at Albany.
Sogeti may keep working on KB's data warehouse [Computable]
Sogeti will once again be the data partner of the Koninklijke Bibliotheek (KB) for the next three years, with an option to extend to a maximum of six years. The IT company has managed the data warehouse since 2016 and now gets...
HPE strengthens gen-AI ties with Nvidia [Computable]
Infrastructure specialist Hewlett Packard Enterprise (HPE) will work more closely with AI hardware and software vendor Nvidia. Together, from January 2024, they will offer a powerful enterprise computing solution for generative artificial intelligence (gen-AI).
Econocom announces international branch: Gather [Computable]
The Franco-Belgian IT service provider Econocom has set up a separate, internationally operating business unit under the name Gather. This branch bundles expertise in audio-visual solutions, unified communications and IT products and services, aimed at larger organisations...
Coalition: improve cyclist safety with sensors [Computable]
The newly founded Coalition for Cyclist Safety, with bicycle manufacturer Koninklijke Gazelle on board, is working to improve cyclist safety using sensor technology, also known as vehicle-to-everything (V2X) technology. The automotive industry serves as a shining example;...
Civil servants may experiment with gen-AI under conditions [Computable]
The cabinet will not manage to present a complete vision on generative AI (gen-AI) this year after all. The House of Representatives can expect such an integral picture of the impact this technology has on our society...
Software vendor Topdesk receives growth capital [Computable]
Delft-based Topdesk is receiving a capital injection of two hundred million euros for growth and further development. CVC Capital Partners, which is taking a minority stake, will give the vendor of service management software more clout.
Four million to stimulate datacenter education in the EU [Computable]
The European Commission (EC) has awarded a four-million-euro subsidy to the project Colleges for European Datacenter Education (Cedce). Its goal is to offer high-quality education focused on datacenters. The project starts...
Startup Nedscaper brings Fox-IT founder on board [Computable]
Menno van der Marel, co-founder of IT security firm Fox-IT, becomes strategic director of Nedscaper. That Dutch/South African startup provides security services for Microsoft environments. Van der Marel is also investing 2.2 million euros in the company.
PQR CEO Marijke Kasius moves on to Bechtle [Computable]
Bechtle is appointing Marijke Kasius as country director for the group's companies in the Netherlands as of 1 January. The 39-year-old Kasius currently leads IT service provider PQR together with Marco Lesmeister. That position will be taken over by Marc...
Former IBM and Ajax director Frank Kales has died [Computable]
Frank Kales passed away on 8 December at the age of 81. He was known among football connoisseurs as general director of football club Ajax during the turbulent 1999-2000 period. Before that, he worked for decades at IBM, where he ultimately...
EU AI Act drives up costs for software companies [Computable]
The arrival of the extensive and sometimes far-reaching artificial intelligence (AI) regulation on which EU negotiators reached agreement last night will not be without financial consequences for entrepreneurs. 'We have an AI deal. But an expensive one,' says...
Historic EU AI agreement reins in ChatGPT [Computable]
The EU AI Act will include rules for the 'foundation models' that underpin the enormous progress in AI. The European Commission reached agreement on this last night with the European...
Eset provides DNS filtering to KPN customers [Computable]
IT security firm Eset is providing domain name system (DNS) filtering to telecom company KPN. This service is said to better protect the home networks of KPN customers against malware, phishing and unwanted content.
Governments are not yet working well with the Woo [Computable]
Government organisations often do not yet apply the new Open Government Act (Woo) effectively, mainly due to limited capacity and a lack of priority. Civil servants also feel restricted in their freedom to give advice. This is shown...
West Brabant schools help SMEs through a hackathon [Computable]
Students from the West Brabant educational institutions Avans, BUas and Curio will support entrepreneurs in their digital development. This Friday a hackathon takes place at the so-called Digiwerkplaats Mkb, where twenty Avans students, in small groups, will build a sustainability dashboard for three...
CWI organises Cobol event to raise urgency [Computable]
The Centrum Wiskunde & Informatica (CWI) is organising an event on 18 January about the future of Cobol and mainframes. For this strategic Cobol day the centre is working together with Quuks and the Software Improvement Group (SIG). According to the organisation...
Plan for cloud restrictions splits the EU [Computable]
A broad front is forming against the European Commission's plans for sovereignty requirements that mainly favour French cloud companies. In its opposition, the Netherlands has now secured the support of thirteen other EU member states, including Germany....
Unilever again chooses SAP warehouse system [Computable]
Because of the doubling of production capacity at the factory in Nyirbator, Hungary, Unilever had to take a new, larger local warehouse into use, along with a new warehouse management system (WMS). The food company's choice once again fell on...
Lyvia Group acquires Facility Kwadraat [Computable]
The Swedish Lyvia Group is making its first acquisition in the Netherlands: Facility Kwadraat. This company from Den Bosch provides software-as-a-service (SaaS) for facility management, long-term maintenance, rental management and property management.
Adoption of generative AI is slow [Computable]
Despite the great interest, a majority of large enterprises are not yet using generative AI (gen-AI) such as ChatGPT. The infrastructure in particular forms a barrier to the implementation of the large language models (LLMs) that...
ASM puts 300 million into American expansion [Computable]
ASM, the chip industry supplier that until recently was called ASM International, will invest three hundred million dollars over the next five years in expanding its American operations. The site in Arizona will be significantly expanded.
With Gemini, Google comes very close to OpenAI [Computable]
With the launch of Gemini, Google's largest and most ingenious artificial intelligence (AI) language model, the tech company is mounting an attack on the leading position of OpenAI's GPT-4. According to AI experts, the difference between the two large language models...
Booking.com hack poses a challenge for the travel sector [Computable]
The recent hack targeting Booking.com says everything about the impact of cybercrime on the hotel and travel sector. In the scam, customer data was stolen and offered for sale on the dark web. In the process,...
Van Oord maps climate risks [Computable]
Van Oord has developed an open source tool intended to provide insight into climate change and the risks that come with it. With this software, which combines multiple data layers, the dredging and marine engineering company wants to map coastal areas and ecosystems worldwide...
Django Authentication Video Tutorial [Simple is Better Than Complex]
In this tutorial series, we are going to explore Django’s authentication system by implementing sign up, login, logout, password change, password reset and protected views from non-authenticated users. This tutorial is organized in 8 videos, one for each topic, ranging from 4 min to 15 min each.
Starting a Django project from scratch, creating a virtual environment and an initial Django app. After that, we are going to set up the templates and create an initial view to start working on the authentication.
If you are already familiar with Django, you can skip this video and jump to the Sign Up tutorial below.
First thing we are going to do is implement a sign up view using the built-in UserCreationForm. In this video you are also going to get some insights on basic Django form processing.
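For reference, here is a minimal sketch of such a sign-up view; the names signup, home and signup.html are placeholders rather than necessarily the exact ones used in the video:

views.py
from django.contrib.auth import login
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import redirect, render

def signup(request):
    if request.method == "POST":
        form = UserCreationForm(request.POST)
        if form.is_valid():
            user = form.save()        # creates the new user
            login(request, user)      # log the new user in right away
            return redirect("home")   # 'home' is an assumed URL name
    else:
        form = UserCreationForm()
    return render(request, "signup.html", {"form": form})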
In this video tutorial we are going to first include the built-in Django auth URLs in our project and then proceed to implement the login view.
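As a rough sketch, including the built-in auth URLs looks like this; the module path mysite.core and the home/signup views are assumptions about the project layout, not something mandated by Django:

urls.py
from django.contrib import admin
from django.urls import include, path

from mysite.core import views  # assumed app layout

urlpatterns = [
    path("", views.home, name="home"),
    path("signup/", views.signup, name="signup"),
    path("accounts/", include("django.contrib.auth.urls")),  # login, logout, password views
    path("admin/", admin.site.urls),
]

The included login view expects a template such as registration/login.html to exist.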
In this tutorial we are going to include Django logout and also start playing with conditional templates, displaying different content depending on whether the user is authenticated or not.
The password change is a view where an authenticated user can change their password.
This tutorial is perhaps the most complicated one, because it involves several views and also sending emails. In this video tutorial you are going to learn how to use the default implementation of the password reset process and how to change the email messages.
After implementing the whole authentication system, this video gives you an overview of how to protect some views from non-authenticated users by using the @login_required decorator and also class-based view mixins.
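As a small sketch of both approaches (the view names and the template are illustrative only):

from django.contrib.auth.decorators import login_required
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import HttpResponse
from django.views.generic import TemplateView

@login_required
def secret_page(request):
    # Anonymous visitors are redirected to settings.LOGIN_URL instead of seeing this.
    return HttpResponse("Only authenticated users can see this.")

class SecretPageView(LoginRequiredMixin, TemplateView):
    # The mixin should come first so its check runs before the view logic.
    template_name = "secret_page.html"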
Extra video showing how to integrate Django with Bootstrap 4 and how to use Django Crispy Forms to render Bootstrap forms properly. This video also includes some general advice and tips about using Bootstrap 4.
If you want to learn more about Django authentication and some extra stuff related to it, like how to use Bootstrap to make your auth forms look good, or how to write unit tests for your auth-related views, you can read the fourth part of my beginners guide to Django: A Complete Beginner’s Guide to Django - Part 4 - Authentication.
Of course the official documentation is the best source of information: Using the Django authentication system
The code used in this tutorial: github.com/sibtc/django-auth-tutorial-example
This was my first time recording this kind of content, so your feedback is highly appreciated. Please let me know what you think!
And don’t forget to subscribe to my YouTube channel! I will post exclusive Django tutorials there. So stay tuned! :-)
What You Should Know About The Django User Model [Simple is Better Than Complex]
The goal of this article is to discuss the caveats of the default Django user model implementation and also to give you some advice on how to address them. It is important to know the limitations of the current implementation so you can avoid the most common pitfalls.
Something to keep in mind is that the Django user model is heavily based on its initial implementation, which is at least 16 years old. Because users and authentication are a core part of the majority of web applications built with Django, most of its quirks have persisted in subsequent releases so as to maintain backward compatibility.
The good news is that Django offers many ways to override and customize its default implementation to fit your application needs. But some of those changes must be done right at the beginning of the project; otherwise it would be too much of a hassle to change the database structure after your application is in production.
Below, the topics that we are going to cover in this article:
First, let’s explore the caveats and next we discuss the options.
Even though the username field is marked as unique, by default the check is case-sensitive. That means the usernames john.doe and John.doe identify two different users in your application.
This can be a security issue if your application has social aspects that build around the username, providing a public URL to a profile like Twitter, Instagram or GitHub for example.
It also delivers a poor user experience because people don't expect john.doe to be a different username than John.Doe, and if the user does not type the username in exactly the same way as when they created their account, they might be unable to log in to your application.
Possible Solutions:
Replace the username CharField with the CICharField instead (which is case-insensitive)
Override the method get_by_natural_key from the UserManager to query the database using iexact (see the sketch below)
Create a custom authentication backend based on the ModelBackend implementation
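As a sketch of the second option, a custom manager could override get_by_natural_key like this; it assumes you are using (or switching to) a custom user model, here named User inside an accounts app referenced by AUTH_USER_MODEL:

from django.contrib.auth.models import AbstractUser, UserManager

class CaseInsensitiveUserManager(UserManager):
    def get_by_natural_key(self, username):
        # Look the username up ignoring case; using USERNAME_FIELD keeps this
        # working even if the username field is ever renamed.
        field = "{}__iexact".format(self.model.USERNAME_FIELD)
        return self.get(**{field: username})

class User(AbstractUser):
    objects = CaseInsensitiveUserManager()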
This is not necessarily an issue, but it is important for you to understand what that means and what its effects are.
By default the username field accepts letters, numbers and the characters @, ., +, -, and _.
The catch here is which letters it accepts.
For example, joão would be a valid username. Similarly, Джон or 約翰 would also be valid usernames.
Django ships with two username validators: ASCIIUsernameValidator and UnicodeUsernameValidator. If the intended behavior is to only accept letters from A-Z, you may want to switch the username validator so that it accepts ASCII letters only, by using the ASCIIUsernameValidator.
Possible Solutions:
Switch the username validator to the ASCIIUsernameValidator (see the sketch below)
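One way to enforce this without touching the model is to run the validator in the sign-up form. This is only a sketch of that idea, and the form name is illustrative:

from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.validators import ASCIIUsernameValidator

class ASCIIUserCreationForm(UserCreationForm):
    def clean_username(self):
        username = self.cleaned_data.get("username")
        ASCIIUsernameValidator()(username)  # raises ValidationError for non-ASCII usernames
        return username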
Multiple users can have the same email address associated with their account.
By default the email is used to recover a password. If there is more than one user with the same email address, the password reset will be initiated for all accounts and the user will receive an email for each active account.
It also may not be an issue but this will certainly make it impossible to offer the option to authenticate the user using the email address (like those sites that allow you to login with username or email address).
Possible Solutions:
Validate the email uniqueness on the forms used to create and update users (see the sketch below)
Replace the user model with a custom one extending AbstractBaseUser to define the email field from scratch, marking it as unique
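For the first option, a sign-up form could refuse duplicate email addresses with something along these lines (a sketch; the form name SignUpForm is an assumption):

from django import forms
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User

class SignUpForm(UserCreationForm):
    email = forms.EmailField(required=True)

    class Meta(UserCreationForm.Meta):
        fields = ("username", "email")

    def clean_email(self):
        email = self.cleaned_data.get("email")
        # Case-insensitive check so John@example.com and john@example.com collide.
        if User.objects.filter(email__iexact=email).exists():
            self.add_error("email", "A user with this email address already exists.")
        return email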
By default the email field does not allow null, however it allows blank values, so it pretty much allows users to not inform an email address.
Also, this may not be an issue for your application. But if you intend to allow users to log in with email it may be a good idea to enforce the registration of this field.
When using the built-in resources like user creation forms or when using model forms you need to pay attention to this detail if the desired behavior is to always have the user email.
Possible Solutions:
Make the email field required on your sign-up and user forms
Replace the user model with a custom one, either extending AbstractUser and redeclaring the email field (see the sketch below) or extending AbstractBaseUser to define the email field from scratch
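For new projects, a custom user model can simply redeclare the field. This is a sketch assuming a users app and recent Django versions (which allow overriding fields inherited from abstract models):

from django.contrib.auth.models import AbstractUser
from django.db import models

class User(AbstractUser):
    # Redeclaring the field makes the email mandatory at the form level
    # (add unique=True here as well if you also want to fix the previous caveat).
    email = models.EmailField("email address", blank=False)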
There is a small catch in the user creation process: if the set_password method is called passing None as a parameter, it will produce an unusable password. And that also means that the user will be unable to start a password reset to set the first password.
You can end up in that situation if you are using social networks like Facebook or Twitter to allow the user to create an account on your website.
Another way of ending up in this situation is simply by creating a user using the User.objects.create_user()
or
User.objects.create_superuser()
without providing an initial password.
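To make the catch concrete, here is a minimal sketch using the default user model (names and email are illustrative):

from django.contrib.auth import get_user_model

User = get_user_model()

# No password argument: Django calls set_password(None) under the hood,
# which stores an unusable password hash.
user = User.objects.create_user(username="john.doe", email="john@example.com")

print(user.has_usable_password())  # False: the password reset flow skips this account
print(user.check_password(""))     # False: no password will ever match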
Possible Solutions:
Changing the user model is something you want to do early on. After your database schema is generated and your database is populated, it will be very tricky to swap the user model.
The reason is that you are likely going to have foreign keys referencing the user table, and Django's internal tables will also create hard references to it. If you plan to change it later on, you will need to change and migrate the database by yourself.
Possible Solutions:
- Replace the default user model right at the beginning of the project, extending AbstractUser, and change a single configuration in the settings module. This will give you tremendous freedom and it will make things way easier in the future should the requirements change.

To address the limitations we discussed in this article we have two options: (1) implement workarounds to fix the behavior of the default user model; (2) replace the default user model altogether and fix the issues for good.
What dictates which approach you should use is the stage your project is currently in:
- If you have an existing project in production using the default django.contrib.auth.models.User, go with the first solution and implement the workarounds;
- If you are starting a new project, replace the user model right away and fix the issues for good.

First let's have a look at a few workarounds that you can implement if your project is already in production. Keep in mind that those solutions assume that you don't have direct access to the User model, that is, you are currently using the default User model, importing it from django.contrib.auth.models.
If you did replace the User model, then jump to the next section to get better tips on how to fix the issues.
Before making any changes you need to make sure you don’t have conflicting usernames on your database. For example,
if you have a User with the username maria
and another with the username Maria
you have to plan a data migration
first. It is difficult to tell you what to do because it really depends on how you want to handle it. One option is
to append some digits after the username, but that can disturb the user experience.
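If you do find conflicts, one way to handle the "append some digits" idea is a data migration along these lines. This is only a sketch; the dependencies and suffix scheme are assumptions you should adapt to your project:

from django.db import migrations


def dedupe_usernames(apps, schema_editor):
    # Work on the historical model so the migration stays valid over time.
    User = apps.get_model("auth", "User")
    seen = set()
    for user in User.objects.order_by("id"):
        base = user.username
        key = base.lower()
        counter = 1
        # Append a numeric suffix until the lowercased username is unique.
        while key in seen:
            user.username = f"{base}{counter}"
            key = user.username.lower()
            counter += 1
        if user.username != base:
            user.save(update_fields=["username"])
        seen.add(key)


class Migration(migrations.Migration):

    dependencies = [
        ("auth", "0001_initial"),
        # plus your app's latest migration
    ]

    operations = [
        migrations.RunPython(dedupe_usernames, migrations.RunPython.noop),
    ]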
Now let’s say you checked your database and there are no conflicting usernames and you are good to go.
First thing you need to do is to protect your sign up forms to not allow conflicting usernames to create accounts.
Then on your user creation form, used to sign up, you could validate the username like this:
def clean_username(self):
    username = self.cleaned_data.get("username")
    if User.objects.filter(username__iexact=username).exists():
        self.add_error("username", "A user with this username already exists.")
    return username
If you are handling user creation in a rest API using DRF, you can do something similar in your serializer:
def validate_username(self, value):
    if User.objects.filter(username__iexact=value).exists():
        raise serializers.ValidationError("A user with this username already exists.")
    return value
In the previous example, the ValidationError is the one defined in DRF.
The iexact lookup in the queryset filter will query the database ignoring case.
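For instance (values are illustrative), both of the lookups below match the same stored row, even if the user signed up as "John.Doe":

User.objects.filter(username__iexact="john.doe")
User.objects.filter(username__iexact="JOHN.DOE")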
Now that the user creation is sanitized we can proceed to define a custom authentication backend.
Create a module named backends.py anywhere in your project and add the following snippet:
backends.py
from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend
class CaseInsensitiveModelBackend(ModelBackend):
    def authenticate(self, request, username=None, password=None, **kwargs):
        UserModel = get_user_model()
        if username is None:
            username = kwargs.get(UserModel.USERNAME_FIELD)
        try:
            case_insensitive_username_field = '{}__iexact'.format(UserModel.USERNAME_FIELD)
            user = UserModel._default_manager.get(**{case_insensitive_username_field: username})
        except UserModel.DoesNotExist:
            # Run the default password hasher once to reduce the timing
            # difference between an existing and a non-existing user (#20760).
            UserModel().set_password(password)
        else:
            if user.check_password(password) and self.user_can_authenticate(user):
                return user
Now switch the authentication backend in the settings.py module:
settings.py
AUTHENTICATION_BACKENDS = ('mysite.core.backends.CaseInsensitiveModelBackend', )
Please note that 'mysite.core.backends.CaseInsensitiveModelBackend' must be changed to the valid path where you created the backends.py module.
It is important to have handled all conflicting users before changing the authentication backend, because otherwise it could raise a 500 error caused by a MultipleObjectsReturned exception.
Here we can borrow the built-in UsernameField
and customize it to append the ASCIIUsernameValidator
to the list of
validators:
from django.contrib.auth.forms import UsernameField
from django.contrib.auth.validators import ASCIIUsernameValidator
class ASCIIUsernameField(UsernameField):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.validators.append(ASCIIUsernameValidator())
Then on the Meta
of your User creation form you can replace the form field class:
class UserCreationForm(forms.ModelForm):
    # field definitions...

    class Meta:
        model = User
        fields = ("username",)
        field_classes = {'username': ASCIIUsernameField}
Here all you can do is sanitize and handle the user input in all views where your users can modify their email address.
You have to include the email field on your sign up form/serializer as well.
Then just make it mandatory like this:
class UserCreationForm(forms.ModelForm):
    email = forms.EmailField(required=True)
    # other field definitions...

    class Meta:
        model = User
        fields = ("username",)
        field_classes = {'username': ASCIIUsernameField}

    def clean_email(self):
        email = self.cleaned_data.get("email")
        if User.objects.filter(email__iexact=email).exists():
            self.add_error("email", _("A user with this email already exists."))
        return email
You can also check a complete and detailed example of this form on the project shared together with this post: userworkarounds
Now I’m going to show you how I usually like to extend and replace the default User model. It is a little bit verbose but that is the strategy that will allow you to access all the inner parts of the User model and make it better.
To replace the User model you have two options: extending the AbstractBaseUser
or extending the AbstractUser
.
To illustrate what that means, I drew the following diagram of how the default Django user model is implemented:
The green circle identified with the label User is actually the one you import from django.contrib.auth.models, and that is the implementation that we discussed in this article.
If you look at the source code, its implementation looks like this:
class User(AbstractUser):
    class Meta(AbstractUser.Meta):
        swappable = 'AUTH_USER_MODEL'
So basically it is just an implementation of the AbstractUser, meaning all the fields and logic are implemented in the abstract class.
It is done that way so we can easily extend the User model: create a subclass of AbstractUser and add the other features and fields you like.
But there is a limitation: you can't override an existing model field. For example, you can't redefine the email field to make it mandatory or to change its length.
So extending the AbstractUser
class is only useful when you want to modify its methods, add more fields or swap the
objects
manager.
If you want to remove a field or change how the field is defined, you have to extend the user model from the
AbstractBaseUser
.
The best strategy to have full control over the user model is creating a new concrete class from the PermissionsMixin
and the AbstractBaseUser
.
Note that the PermissionsMixin
is only necessary if you intend to use the Django admin or the built-in permissions
framework. If you are not planning to use it you can leave it out. And in the future if things change you can add
the mixin and migrate the model and you are ready to go.
So the implementation strategy looks like this:
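In code terms, a minimal sketch of that strategy is a concrete model built from the two base classes (the full implementation follows below):

from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin


class CustomUser(AbstractBaseUser, PermissionsMixin):
    # Re-declare every field you need (username, email, is_staff, ...)
    # exactly the way your application requires it.
    ...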
Now I'm going to show you my go-to implementation. I always use PostgreSQL, which, in my opinion, is the best database to use with Django; at least it is the one with the most support and features. So I'm going to show an approach that uses PostgreSQL's CITextExtension. Then I will show some options if you are using other database engines.
For this implementation I always create an app named accounts
:
django-admin startapp accounts
Then before adding any code I like to create an empty migration to install the PostgreSQL extensions that we are going to use:
python manage.py makemigrations accounts --empty --name="postgres_extensions"
Inside the migrations
directory of the accounts
app you will find an empty migration called
0001_postgres_extensions.py
.
Modify the file to include the extension installation:
migrations/0001_postgres_extensions.py
from django.contrib.postgres.operations import CITextExtension
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
    ]

    operations = [
        CITextExtension()
    ]
Now let’s implement our model. Open the models.py
file inside the accounts
app.
I always grab the initial code directly from Django’s source on GitHub, copying the AbstractUser
implementation, and
modify it accordingly:
accounts/models.py
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager
from django.contrib.auth.validators import ASCIIUsernameValidator
from django.contrib.postgres.fields import CICharField, CIEmailField
from django.core.mail import send_mail
from django.db import models
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
class CustomUser(AbstractBaseUser, PermissionsMixin):
    username_validator = ASCIIUsernameValidator()

    username = CICharField(
        _("username"),
        max_length=150,
        unique=True,
        help_text=_("Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only."),
        validators=[username_validator],
        error_messages={
            "unique": _("A user with that username already exists."),
        },
    )
    first_name = models.CharField(_("first name"), max_length=150, blank=True)
    last_name = models.CharField(_("last name"), max_length=150, blank=True)
    email = CIEmailField(
        _("email address"),
        unique=True,
        error_messages={
            "unique": _("A user with that email address already exists."),
        },
    )
    is_staff = models.BooleanField(
        _("staff status"),
        default=False,
        help_text=_("Designates whether the user can log into this admin site."),
    )
    is_active = models.BooleanField(
        _("active"),
        default=True,
        help_text=_(
            "Designates whether this user should be treated as active. Unselect this instead of deleting accounts."
        ),
    )
    date_joined = models.DateTimeField(_("date joined"), default=timezone.now)

    objects = UserManager()

    EMAIL_FIELD = "email"
    USERNAME_FIELD = "username"
    REQUIRED_FIELDS = ["email"]

    class Meta:
        verbose_name = _("user")
        verbose_name_plural = _("users")

    def clean(self):
        super().clean()
        self.email = self.__class__.objects.normalize_email(self.email)

    def get_full_name(self):
        """
        Return the first_name plus the last_name, with a space in between.
        """
        full_name = "%s %s" % (self.first_name, self.last_name)
        return full_name.strip()

    def get_short_name(self):
        """Return the short name for the user."""
        return self.first_name

    def email_user(self, subject, message, from_email=None, **kwargs):
        """Send an email to this user."""
        send_mail(subject, message, from_email, [self.email], **kwargs)
Let's review what we changed here:
- Changed the username_validator to use ASCIIUsernameValidator
- The username field is now using CICharField, which is case-insensitive
- The email field is now mandatory, unique, and is using CIEmailField, which is case-insensitive

On the settings module, add the following configuration:
settings.py
AUTH_USER_MODEL = "accounts.CustomUser"
Now we are ready to create our migrations:
python manage.py makemigrations
Apply the migrations:
python manage.py migrate
And you should get a similar result if you are just creating your project and if there is no other models/apps:
Operations to perform:
Apply all migrations: accounts, admin, auth, contenttypes, sessions
Running migrations:
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
If you check your database schema you will see that there is no auth_user table (which is the default one); the users are now stored in the table accounts_customuser.
And all the foreign keys to the user model will be created pointing to this table. That's why it is important to do it right at the beginning of your project, before you create the database schema.
Now you have all the freedom. You can replace the first_name
and last_name
and use just one field called name
.
You could remove the username
field and identify your User model with the email
(then just make sure you change
the property USERNAME_FIELD
to email
).
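As a sketch of that last idea (not the implementation used in this post, and the class name is just illustrative), an email-only user model could look like this:

class EmailUser(AbstractBaseUser, PermissionsMixin):
    email = CIEmailField(_("email address"), unique=True)
    # is_staff, is_active, date_joined, name fields, etc. as before

    # You would also need a custom manager here: the default UserManager's
    # create_user() signature expects a username argument.

    EMAIL_FIELD = "email"
    USERNAME_FIELD = "email"  # log in with the email address
    REQUIRED_FIELDS = []      # the USERNAME_FIELD is always required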
You can grab the source code on GitHub: customuser
If you are not using PostgreSQL and want to implement case-insensitive authentication and you have direct access to the User model, a nice hack is to create a custom manager for the User model, like this:
accounts/models.py
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager


class CustomUserManager(UserManager):
    def get_by_natural_key(self, username):
        case_insensitive_username_field = '{}__iexact'.format(self.model.USERNAME_FIELD)
        return self.get(**{case_insensitive_username_field: username})


class CustomUser(AbstractBaseUser, PermissionsMixin):
    # all the fields, etc...

    objects = CustomUserManager()

    # meta, methods, etc...
Then you could also sanitize the username field on the clean()
method to always save it as lowercase so you don’t have
to bother having case variant/conflicting usernames:
def clean(self):
    super().clean()
    self.email = self.__class__.objects.normalize_email(self.email)
    self.username = self.username.lower()
In this tutorial we discussed a few caveats of the default User model implementation and presented a few options to address those issues.
The takeaway message here is: always replace the default User model.
If your project is already in production, don’t panic: there are ways to fix those issues following the recommendations in this post.
I also have two detailed blog posts: one about how to make the username field case-insensitive and another about how to extend the Django user model:
You can also explore the source code presented in this post on GitHub:
How to Start a Production-Ready Django Project [Simple is Better Than Complex]
In this tutorial I’m going to show you how I usually start and organize a new Django project nowadays. I’ve tried many different configurations and ways to organize the project, but for the past 4 years or so this has been consistently my go-to setup.
Please note that this is not intended to be a “best practice” guide or to fit every use case. It's just the way I like to use Django, and it's also the way that I've found allows your project to grow in a healthy way.
Usually those are the premises I take into account when setting up a project:
Usually I work with three environment dimensions in my code: local, tests and production. I like to see it as a “mode” in which I run the project. What dictates which mode I'm running the project in is which settings.py I'm currently using.
The local dimension always comes first. It is the settings and setup that a developer will use on their local machine.
All the defaults and configurations must be geared toward the local development environment first.
The reason why I like to do it that way is that the project must be as simple as possible for a new hire to clone the repository, run the project and start coding.
The production environment will usually be configured and maintained by experienced developers and by those who are more familiar with the code base itself. And because the deployment should be automated, there is no reason for people to be re-creating the production server over and over again. So it is perfectly fine for the production setup to require a few extra steps and configuration.
The tests environment will also be available locally, so developers can test the code and run the static checks.
But the idea of the tests environment is to expose it to a CI environment like Travis CI, Circle CI, AWS CodePipeline, etc.
It is a simple setup in which you can install the project and run all the unit tests.
The production dimension is the real deal. This is the environment that goes live without the testing and debugging utilities.
I also use this “mode” or dimension to run the staging server.
A staging server is where you roll out new features and bug fixes before applying to the production server.
The idea here is that your staging server should run in production mode, and the only difference is going to be your static/media server and database server. And this can be achieved just by changing the configuration to tell what is the database connection string for example.
But the main thing is that you should not have any conditional in your code that checks if it is the production or staging server. The project should run exactly in the same way as in production.
Right from the beginning it is a good idea to setup a remote version control service. My go-to option is Git on GitHub. Usually I create the remote repository first then clone it on my local machine to get started.
Let’s say our project is called simple
, after creating the repository on GitHub I will create a directory named
simple
on my local machine, then within the simple
directory I will clone the repository, like shown on the
structure below:
simple/
└── simple/ (git repo)
Then I create the virtualenv
outside of the Git repository:
simple/
├── simple/
└── venv/
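Assuming a recent Python 3, the virtual environment itself can be created from inside the wrapper directory with the standard library:

python -m venv venv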
Then alongside the simple
and venv
directories I may place some other support files related to the project which I
do not plan to commit to the Git repository.
The reason I do that is because it is more convenient to destroy and re-create/re-clone both the virtual environment or the repository itself.
It is also good to store your virtual environment outside of the git repository/project root so you don’t need to bother ignoring its path when using libs like flake8, isort, black, tox, etc.
You can also use tools like virtualenvwrapper
to manage your virtual environments, but I prefer doing it that way
because everything is in one place. And if I no longer need to keep a given project on my local machine, I can delete
it completely without leaving behind anything related to the project on my machine.
The next step is installing Django inside the virtualenv so we can use the django-admin
commands.
source venv/bin/activate
pip install django
Inside the simple
directory (where the git repository was cloned) start a new project:
django-admin startproject simple .
Pay attention to the . at the end of the command. It is necessary so that we don't create yet another directory called simple.
So now the structure should be something like this:
simple/ <- (1) Wrapper directory with all project contents including the venv
├── simple/ <- (2) Project root and git repository
│ ├── .git/
│ ├── manage.py
│ └── simple/ <- (3) Project package, apps, templates, static, etc
│ ├── __init__.py
│ ├── asgi.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
At this point I already complement the project package directory with three extra directories for templates, static and locale.
Both templates and static we are going to manage at a project level and at an app level; these refer to the global templates and static files.
The locale directory is necessary in case you are using i18n to translate your application to other languages. This is where you are going to store the .mo and .po files.
So the structure now should be something like this:
simple/
├── simple/
│ ├── .git/
│ ├── manage.py
│ └── simple/
│ ├── locale/
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
Inside the project root (2) I like to create a directory called requirements with all the .txt files, breaking down the project dependencies like this:
- base.txt: Main dependencies, strictly necessary to make the project run. Common to all environments
- tests.txt: Inherits from base.txt + test utilities
- local.txt: Inherits from tests.txt + development utilities
- production.txt: Inherits from base.txt + production-only dependencies

Note that I do not have a staging.txt requirements file; that's because the staging environment is going to use the production.txt requirements, so we have an exact copy of the production environment.
simple/
├── simple/
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ │ ├── base.txt
│ │ ├── local.txt
│ │ ├── production.txt
│ │ └── tests.txt
│ └── simple/
│ ├── locale/
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
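With that layout, each environment simply installs its own file; for example, on a developer machine, from the project root:

pip install -r requirements/local.txt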
Now let's have a look inside each of those requirements files and at the Python libraries that I always use no matter what type of Django project I'm developing.
base.txt
dj-database-url==0.5.0
Django==3.2.4
psycopg2-binary==2.9.1
python-decouple==3.4
pytz==2021.1
- python-decouple: reads configuration values from .env files in a safe way and exposes them to the settings.py module. It also helps with decoupling configuration from source code

tests.txt
-r base.txt
black==21.6b0
coverage==5.5
factory-boy==3.2.0
flake8==3.9.2
isort==5.9.1
tox==3.23.1
The -r base.txt
inherits all the requirements defined in the base.txt
file
local.txt
-r tests.txt
django-debug-toolbar==3.2.1
ipython==7.25.0
The -r tests.txt
inherits all the requirements defined in the base.txt
and tests.txt
file
production.txt
-r base.txt
gunicorn==20.1.0
sentry-sdk==1.1.0
The -r base.txt
inherits all the requirements defined in the base.txt
file
Also following the environments and modes premise I like to setup multiple settings modules. Those are going to serve as the entry point to determine in which mode I’m running the project.
Inside the simple
project package, I create a new directory called settings
and break down the files like this:
simple/ (1)
├── simple/ (2)
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ │ ├── base.txt
│ │ ├── local.txt
│ │ ├── production.txt
│ │ └── tests.txt
│ └── simple/ (3)
│ ├── locale/
│ ├── settings/
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── local.py
│ │ ├── production.py
│ │ └── tests.py
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
Note that I removed the settings.py
that used to live inside the simple/ (3)
directory.
The majority of the code will live inside the base.py
settings module.
Everything that we can set only once in the base.py
and change its value using python-decouple
we should keep in the
base.py
and never repeat/override in the other settings modules.
After the removal of the main settings.py
a nice touch is to modify the manage.py
file to set the
local.py
as the default settings module so we can still run commands like python manage.py runserver
without any
further parameters:
manage.py
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'simple.settings.local')  # <- here!
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()
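Since manage.py only sets a default, any other mode can be selected per command with the --settings flag or the DJANGO_SETTINGS_MODULE environment variable, for example:

python manage.py check --deploy --settings=simple.settings.production
DJANGO_SETTINGS_MODULE=simple.settings.production python manage.py migrate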
Now let’s have a look on each of those settings modules.
base.py
from pathlib import Path
import dj_database_url
from decouple import Csv, config
BASE_DIR = Path(__file__).resolve().parent.parent
# ==============================================================================
# CORE SETTINGS
# ==============================================================================
SECRET_KEY = config("SECRET_KEY", default="django-insecure$simple.settings.local")
DEBUG = config("DEBUG", default=True, cast=bool)
ALLOWED_HOSTS = config("ALLOWED_HOSTS", default="127.0.0.1,localhost", cast=Csv())
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
]
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
ROOT_URLCONF = "simple.urls"
INTERNAL_IPS = ["127.0.0.1"]
WSGI_APPLICATION = "simple.wsgi.application"
# ==============================================================================
# MIDDLEWARE SETTINGS
# ==============================================================================
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
# ==============================================================================
# TEMPLATES SETTINGS
# ==============================================================================
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [BASE_DIR / "templates"],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
# ==============================================================================
# DATABASES SETTINGS
# ==============================================================================
DATABASES = {
"default": dj_database_url.config(
default=config("DATABASE_URL", default="postgres://simple:simple@localhost:5432/simple"),
conn_max_age=600,
)
}
# ==============================================================================
# AUTHENTICATION AND AUTHORIZATION SETTINGS
# ==============================================================================
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
},
{
"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
},
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
]
# ==============================================================================
# I18N AND L10N SETTINGS
# ==============================================================================
LANGUAGE_CODE = config("LANGUAGE_CODE", default="en-us")
TIME_ZONE = config("TIME_ZONE", default="UTC")
USE_I18N = True
USE_L10N = True
USE_TZ = True
LOCALE_PATHS = [BASE_DIR / "locale"]
# ==============================================================================
# STATIC FILES SETTINGS
# ==============================================================================
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR.parent.parent / "static"
STATICFILES_DIRS = [BASE_DIR / "static"]
STATICFILES_FINDERS = (
"django.contrib.staticfiles.finders.FileSystemFinder",
"django.contrib.staticfiles.finders.AppDirectoriesFinder",
)
# ==============================================================================
# MEDIA FILES SETTINGS
# ==============================================================================
MEDIA_URL = "/media/"
MEDIA_ROOT = BASE_DIR.parent.parent / "media"
# ==============================================================================
# THIRD-PARTY SETTINGS
# ==============================================================================
# ==============================================================================
# FIRST-PARTY SETTINGS
# ==============================================================================
SIMPLE_ENVIRONMENT = config("SIMPLE_ENVIRONMENT", default="local")
A few comments on the overall base settings file contents:
- The config() calls are from the python-decouple library. It exposes the configuration as environment variables and retrieves their values according to the expected data type. Read more about python-decouple on this guide: How to Use Python Decouple
- SECRET_KEY, DEBUG and ALLOWED_HOSTS default to local/development environment values. That means a new developer won't need to set up a local .env and provide some initial values to run the project locally
- The DATABASES setting uses dj_database_url to translate the one-line connection string into the Python dictionary that Django expects
- For MEDIA_ROOT we are navigating two directories up to create a media directory outside the git repository but inside our project workspace (inside the directory simple/ (1)). So everything is handy and we won't be committing test uploads to our repository
- In the base.py settings I reserve two blocks for third-party Django libraries that I may install, such as Django Rest Framework or Django Crispy Forms. And the first-party settings refer to custom settings that I may create exclusively for our project. Usually I will prefix them with the project name, like SIMPLE_XXX
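For reference, a production .env consumed by these config() calls could look something like this (all values are placeholders):

SECRET_KEY=some-long-random-string
DEBUG=False
ALLOWED_HOSTS=simple.example.com
DATABASE_URL=postgres://simple:simple@db-host:5432/simple
SIMPLE_ENVIRONMENT=production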
local.py
# flake8: noqa
from .base import *
INSTALLED_APPS += ["debug_toolbar"]
MIDDLEWARE.insert(0, "debug_toolbar.middleware.DebugToolbarMiddleware")
# ==============================================================================
# EMAIL SETTINGS
# ==============================================================================
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
Here is where I will setup Django Debug Toolbar for example. Or set the email backend to display the sent emails on console instead of having to setup a valid email server to work on the project.
All the code that is only relevant for the development process goes here.
You can use it to setup other libs like Django Silk to run profiling without exposing it to production.
tests.py
# flake8: noqa
from .base import *
PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]
class DisableMigrations:
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return None


MIGRATION_MODULES = DisableMigrations()
Here I add configurations that help us run the test cases faster. Sometimes disabling the migrations may not work if you have interdependencies between the apps models so Django may fail to create a database without the migrations.
In some projects it is better to keep the test database after the execution.
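Running the suite against this module is then just a matter of pointing at it, and Django's --keepdb flag preserves the test database between runs when that trade-off makes sense:

python manage.py test --settings=simple.settings.tests --keepdb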
production.py
# flake8: noqa
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
import simple
from .base import *
# ==============================================================================
# SECURITY SETTINGS
# ==============================================================================
CSRF_COOKIE_SECURE = True
CSRF_COOKIE_HTTPONLY = True
SECURE_HSTS_SECONDS = 60 * 60 * 24 * 7 * 52 # one year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_SSL_REDIRECT = True
SECURE_BROWSER_XSS_FILTER = True
SECURE_CONTENT_TYPE_NOSNIFF = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SESSION_COOKIE_SECURE = True
# ==============================================================================
# THIRD-PARTY APPS SETTINGS
# ==============================================================================
sentry_sdk.init(
dsn=config("SENTRY_DSN", default=""),
environment=SIMPLE_ENVIRONMENT,
release="simple@%s" % simple.__version__,
integrations=[DjangoIntegration()],
)
The most important part here on the production settings is to enable all the security settings Django offer. I like to do it that way because you can’t run the development server with most of those configurations on.
The other thing is the Sentry configuration.
Note the simple.__version__
on the release. Next we are going to explore how I usually manage the version of the
project.
I like to reuse Django's get_version utility for a simple and PEP 440 compliant version identification.
Inside the project’s __init__.py
module:
simple/
├── simple/
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ └── simple/
│ ├── locale/
│ ├── settings/
│ ├── static/
│ ├── templates/
│ ├── __init__.py <-- here!
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
You can do something like this:
from django import get_version
VERSION = (1, 0, 0, "final", 0)
__version__ = get_version(VERSION)
The only down side of using the get_version
directly from the Django module is that it won’t be able to resolve the
git hash for alpha versions.
A possible solution is making a copy of the django/utils/version.py
file to your project, and then you import it
locally, so it will be able to identify your git repository within the project folder.
But it also depends what kind of versioning you are using for your project. If the version of your project is not really relevant to the end user and you want to keep track of it for internal management like to identify the release on a Sentry issue, you could use a date-based release versioning.
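A date-based scheme can be as simple as hard-coding the release date in the same module (the value below is just an example):

# simple/__init__.py
__version__ = "2021.06.30"  # year.month.day of the release; Sentry would then show simple@2021.06.30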
A Django app is a Python package that you “install” using the INSTALLED_APPS
in your settings file. An app can live pretty
much anywhere: inside or outside the project package or even in a library that you installed using pip
.
Indeed, your Django apps may be reusable in other projects. But that doesn't mean they should be. Don't let it drive your project design, and don't get obsessed over it. Also, an app doesn't necessarily have to represent a “part” of your website/web application.
It is perfectly fine for some apps to not have models, or for other apps to have only views. Some of your modules don't even need to be a Django app at all. I like to see my Django projects as one big Python package and organize it in a way that makes sense, rather than trying to place everything inside reusable apps.
The general recommendation of the official Django documentation is to place your apps in the project root (alongside
the manage.py file, identified here in this tutorial by the simple/ (2)
folder).
But actually I prefer to create my apps inside the project package (identified in this tutorial by the simple/ (3)
folder). I create a module named apps
and then inside the apps
I create my Django apps. The main reason why is that
it creates a nice namespace for the app. It helps you easily identify that a particular import is part of your
project. Also this namespace helps when creating logging rules to handle events in a different way.
Here is an example of how I do it:
simple/ (1)
├── simple/ (2)
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ └── simple/ (3)
│ ├── apps/ <-- here!
│ │ ├── __init__.py
│ │ ├── accounts/
│ │ └── core/
│ ├── locale/
│ ├── settings/
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
In the example above the folders accounts/
and core/
are Django apps created with the command django-admin startapp
.
Those two apps are also always in my projects. The accounts app is the one that I use to replace the default Django User model, and it is also the place where I eventually create password reset, account activation, sign ups, etc.
The core
app I use for general/global implementations. For example to define a model that will be used across most
of the other apps. I try to keep it decoupled from other apps, not importing other apps resources. It usually is a good
place to implement general purpose or reusable views and mixins.
Something to pay attention to when using this approach is that you need to change the name of the app configuration, inside the apps.py file of the Django app:
accounts/apps.py
from django.apps import AppConfig
class AccountsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'accounts'  # <- this is the default name created by the startapp command
You should rename it like this, to respect the namespace:
from django.apps import AppConfig
class AccountsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'simple.apps.accounts'  # <- change to this!
Then on your INSTALLED_APPS
you are going to create a reference to your models like this:
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"simple.apps.accounts",
"simple.apps.core",
]
The namespace also helps to organize your INSTALLED_APPS
making your project apps easily recognizable.
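The namespace also pays off in the logging rules mentioned earlier, because a single logger entry can capture everything under your project's apps. A minimal sketch, assuming the LOGGING setting lives in base.py and a plain console handler:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # One rule covers simple.apps.accounts, simple.apps.core, etc.
        "simple": {"handlers": ["console"], "level": "INFO"},
    },
}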
This is what my app structure looks like:
simple/ (1)
├── simple/ (2)
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ └── simple/ (3)
│ ├── apps/
│ │ ├── accounts/ <- My app structure
│ │ │ ├── migrations/
│ │ │ │ └── __init__.py
│ │ │ ├── static/
│ │ │ │ └── accounts/
│ │ │ ├── templates/
│ │ │ │ └── accounts/
│ │ │ ├── tests/
│ │ │ │ ├── __init__.py
│ │ │ │ └── factories.py
│ │ │ ├── __init__.py
│ │ │ ├── admin.py
│ │ │ ├── apps.py
│ │ │ ├── constants.py
│ │ │ ├── models.py
│ │ │ └── views.py
│ │ ├── core/
│ │ └── __init__.py
│ ├── locale/
│ ├── settings/
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
The first thing I do is create a folder named tests
so I can break down my tests into several files. I always add a
factories.py
to create my model factories using the factory-boy
library.
For both static and templates, always create a directory with the same name as the app first, to avoid name collisions when Django collects all static files and tries to resolve the templates.
The admin.py
may be there or not depending if I’m using the Django Admin contrib app.
Other common modules that you may have is a utils.py
, forms.py
, managers.py
, services.py
etc.
Now I’m going to show you the configuration that I use for tools like isort
, black
, flake8
, coverage
and tox
.
The .editorconfig file is a standard recognized by all major IDEs and code editors. It helps the editor understand the file formatting rules used in the project.
It tells the editor if the project is indented with tabs or spaces. How many spaces/tabs. What’s the max length for a line of code.
I like to use Django’s .editorconfig
file. Here is what it looks like:
.editorconfig
# https://editorconfig.org/
root = true
[*]
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true
end_of_line = lf
charset = utf-8
# Docstrings and comments use max_line_length = 79
[*.py]
max_line_length = 119
# Use 2 spaces for the HTML files
[*.html]
indent_size = 2
# The JSON files contain newlines inconsistently
[*.json]
indent_size = 2
insert_final_newline = ignore
[**/admin/js/vendor/**]
indent_style = ignore
indent_size = ignore
# Minified JavaScript files shouldn't be changed
[**.min.js]
indent_style = ignore
insert_final_newline = ignore
# Makefiles always use tabs for indentation
[Makefile]
indent_style = tab
# Batch files use tabs for indentation
[*.bat]
indent_style = tab
[docs/**.txt]
max_line_length = 79
[*.yml]
indent_size = 2
Flake8 is a Python library that wraps PyFlakes, pycodestyle and Ned Batchelder’s McCabe script. It is a great toolkit for checking your code base against coding style (PEP8), programming errors (like “library imported but unused” and “Undefined name”) and to check cyclomatic complexity.
To learn more about flake8, check this tutorial I posted a while ago: How to Use Flake8.
setup.cfg
[flake8]
exclude = .git,.tox,*/migrations/*
max-line-length = 119
isort is a Python utility / library to sort imports alphabetically, and automatically separated into sections.
To learn more about isort, check this tutorial I posted a while ago: How to Use Python isort Library.
setup.cfg
[isort]
force_grid_wrap = 0
use_parentheses = true
combine_as_imports = true
include_trailing_comma = true
line_length = 119
multi_line_output = 3
skip = migrations
default_section = THIRDPARTY
known_first_party = simple
known_django = django
sections=FUTURE,STDLIB,DJANGO,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
Pay attention to the known_first_party
, it should be the name of your project so isort can group your project’s
imports.
Black is a life changing library to auto-format your Python applications. There is no way I’m coding with Python nowadays without using Black.
Here is the basic configuration that I use:
pyproject.toml
[tool.black]
line-length = 119
target-version = ['py38']
include = '\.pyi?$'
exclude = '''
/(
\.eggs
| \.git
| \.hg
| \.mypy_cache
| \.tox
| \.venv
| _build
| buck-out
| build
| dist
| migrations
)/
'''
In this tutorial I described my go-to project setup when working with Django. That’s pretty much how I start all my projects nowadays.
Here is the final project structure for reference:
simple/
├── simple/
│ ├── .git/
│ ├── .gitignore
│ ├── .editorconfig
│ ├── manage.py
│ ├── pyproject.toml
│ ├── requirements/
│ │ ├── base.txt
│ │ ├── local.txt
│ │ ├── production.txt
│ │ └── tests.txt
│ ├── setup.cfg
│ └── simple/
│ ├── __init__.py
│ ├── apps/
│ │ ├── accounts/
│ │ │ ├── migrations/
│ │ │ │ └── __init__.py
│ │ │ ├── static/
│ │ │ │ └── accounts/
│ │ │ ├── templates/
│ │ │ │ └── accounts/
│ │ │ ├── tests/
│ │ │ │ ├── __init__.py
│ │ │ │ └── factories.py
│ │ │ ├── __init__.py
│ │ │ ├── admin.py
│ │ │ ├── apps.py
│ │ │ ├── constants.py
│ │ │ ├── models.py
│ │ │ └── views.py
│ │ ├── core/
│ │ │ ├── migrations/
│ │ │ │ └── __init__.py
│ │ │ ├── static/
│ │ │ │ └── core/
│ │ │ ├── templates/
│ │ │ │ └── core/
│ │ │ ├── tests/
│ │ │ │ ├── __init__.py
│ │ │ │ └── factories.py
│ │ │ ├── __init__.py
│ │ │ ├── admin.py
│ │ │ ├── apps.py
│ │ │ ├── constants.py
│ │ │ ├── models.py
│ │ │ └── views.py
│ │ └── __init__.py
│ ├── locale/
│ ├── settings/
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── local.py
│ │ ├── production.py
│ │ └── tests.py
│ ├── static/
│ ├── templates/
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
You can also explore the code on GitHub: django-production-template.
How to install Chrome OS on your (old) computer [Laatste Artikelen - Webwereld]
Google has been working hard on Chrome OS for years and, together with several computer manufacturers, releases Chrome devices running that operating system. But you don't necessarily have to buy a dedicated device: you can also put the system on your (old) computer yourself, and we'll show you how.
How to Use Chart.js with Django [Simple is Better Than Complex]
Chart.js is a cool open source JavaScript library that helps you render HTML5 charts. It is responsive and supports 8 different chart types.
In this tutorial we are going to explore a little bit of how to make Django talk with Chart.js and render some simple charts based on data extracted from our models.
For this tutorial all you are going to do is add the Chart.js lib to your HTML page:
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
You can download it from Chart.js official website and use it locally, or you can use it from a CDN using the URL above.
I’m going to use the same example I used for the tutorial How to Create Group By Queries With Django ORM which is a good complement to this tutorial because actually the tricky part of working with charts is to transform the data so it can fit in a bar chart / line chart / etc.
We are going to use the two models below, Country
and City
:
class Country(models.Model):
    name = models.CharField(max_length=30)


class City(models.Model):
    name = models.CharField(max_length=30)
    country = models.ForeignKey(Country, on_delete=models.CASCADE)
    population = models.PositiveIntegerField()
And the raw data stored in the database:
cities | |||
---|---|---|---|
id | name | country_id | population |
1 | Tokyo | 28 | 36,923,000 |
2 | Shanghai | 13 | 34,000,000 |
3 | Jakarta | 19 | 30,000,000 |
4 | Seoul | 21 | 25,514,000 |
5 | Guangzhou | 13 | 25,000,000 |
6 | Beijing | 13 | 24,900,000 |
7 | Karachi | 22 | 24,300,000 |
8 | Shenzhen | 13 | 23,300,000 |
9 | Delhi | 25 | 21,753,486 |
10 | Mexico City | 24 | 21,339,781 |
11 | Lagos | 9 | 21,000,000 |
12 | São Paulo | 1 | 20,935,204 |
13 | Mumbai | 25 | 20,748,395 |
14 | New York City | 20 | 20,092,883 |
15 | Osaka | 28 | 19,342,000 |
16 | Wuhan | 13 | 19,000,000 |
17 | Chengdu | 13 | 18,100,000 |
18 | Dhaka | 4 | 17,151,925 |
19 | Chongqing | 13 | 17,000,000 |
20 | Tianjin | 13 | 15,400,000 |
21 | Kolkata | 25 | 14,617,882 |
22 | Tehran | 11 | 14,595,904 |
23 | Istanbul | 2 | 14,377,018 |
24 | London | 26 | 14,031,830 |
25 | Hangzhou | 13 | 13,400,000 |
26 | Los Angeles | 20 | 13,262,220 |
27 | Buenos Aires | 8 | 13,074,000 |
28 | Xi'an | 13 | 12,900,000 |
29 | Paris | 6 | 12,405,426 |
30 | Changzhou | 13 | 12,400,000 |
31 | Shantou | 13 | 12,000,000 |
32 | Rio de Janeiro | 1 | 11,973,505 |
33 | Manila | 18 | 11,855,975 |
34 | Nanjing | 13 | 11,700,000 |
35 | Rhine-Ruhr | 16 | 11,470,000 |
36 | Jinan | 13 | 11,000,000 |
37 | Bangalore | 25 | 10,576,167 |
38 | Harbin | 13 | 10,500,000 |
39 | Lima | 7 | 9,886,647 |
40 | Zhengzhou | 13 | 9,700,000 |
41 | Qingdao | 13 | 9,600,000 |
42 | Chicago | 20 | 9,554,598 |
43 | Nagoya | 28 | 9,107,000 |
44 | Chennai | 25 | 8,917,749 |
45 | Bangkok | 15 | 8,305,218 |
46 | Bogotá | 27 | 7,878,783 |
47 | Hyderabad | 25 | 7,749,334 |
48 | Shenyang | 13 | 7,700,000 |
49 | Wenzhou | 13 | 7,600,000 |
50 | Nanchang | 13 | 7,400,000 |
51 | Hong Kong | 13 | 7,298,600 |
52 | Taipei | 29 | 7,045,488 |
53 | Dallas–Fort Worth | 20 | 6,954,330 |
54 | Santiago | 14 | 6,683,852 |
55 | Luanda | 23 | 6,542,944 |
56 | Houston | 20 | 6,490,180 |
57 | Madrid | 17 | 6,378,297 |
58 | Ahmedabad | 25 | 6,352,254 |
59 | Toronto | 5 | 6,055,724 |
60 | Philadelphia | 20 | 6,051,170 |
61 | Washington, D.C. | 20 | 6,033,737 |
62 | Miami | 20 | 5,929,819 |
63 | Belo Horizonte | 1 | 5,767,414 |
64 | Atlanta | 20 | 5,614,323 |
65 | Singapore | 12 | 5,535,000 |
66 | Barcelona | 17 | 5,445,616 |
67 | Munich | 16 | 5,203,738 |
68 | Stuttgart | 16 | 5,200,000 |
69 | Ankara | 2 | 5,150,072 |
70 | Hamburg | 16 | 5,100,000 |
71 | Pune | 25 | 5,049,968 |
72 | Berlin | 16 | 5,005,216 |
73 | Guadalajara | 24 | 4,796,050 |
74 | Boston | 20 | 4,732,161 |
75 | Sydney | 10 | 5,000,500 |
76 | San Francisco | 20 | 4,594,060 |
77 | Surat | 25 | 4,585,367 |
78 | Phoenix | 20 | 4,489,109 |
79 | Monterrey | 24 | 4,477,614 |
80 | Inland Empire | 20 | 4,441,890 |
81 | Rome | 3 | 4,321,244 |
82 | Detroit | 20 | 4,296,611 |
83 | Milan | 3 | 4,267,946 |
84 | Melbourne | 10 | 4,650,000 |
countries | |
---|---|
id | name |
1 | Brazil |
2 | Turkey |
3 | Italy |
4 | Bangladesh |
5 | Canada |
6 | France |
7 | Peru |
8 | Argentina |
9 | Nigeria |
10 | Australia |
11 | Iran |
12 | Singapore |
13 | China |
14 | Chile |
15 | Thailand |
16 | Germany |
17 | Spain |
18 | Philippines |
19 | Indonesia |
20 | United States |
21 | South Korea |
22 | Pakistan |
23 | Angola |
24 | Mexico |
25 | India |
26 | United Kingdom |
27 | Colombia |
28 | Japan |
29 | Taiwan |
For the first example we are only going to retrieve the top 5 most populous cities and render it as a pie chart. In this strategy we are going to return the chart data as part of the view context and inject the results in the JavaScript code using the Django Template language.
views.py
from django.shortcuts import render
from mysite.core.models import City
def pie_chart(request):
    labels = []
    data = []

    queryset = City.objects.order_by('-population')[:5]
    for city in queryset:
        labels.append(city.name)
        data.append(city.population)

    return render(request, 'pie_chart.html', {
        'labels': labels,
        'data': data,
    })
Basically in the view above we are iterating through the City
queryset and building a list of labels
and a list of
data
. Here in this case the data
is the population count saved in the City
model.
For the urls.py
just a simple routing:
urls.py
from django.urls import path
from mysite.core import views
urlpatterns = [
path('pie-chart/', views.pie_chart, name='pie-chart'),
]
Now the template. I got a basic snippet from the Chart.js Pie Chart Documentation.
pie_chart.html
{% extends 'base.html' %}
{% block content %}
<div id="container" style="width: 75%;">
<canvas id="pie-chart"></canvas>
</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
<script>
var config = {
type: 'pie',
data: {
datasets: [{
data: {{ data|safe }},
backgroundColor: [
'#696969', '#808080', '#A9A9A9', '#C0C0C0', '#D3D3D3'
],
label: 'Population'
}],
labels: {{ labels|safe }}
},
options: {
responsive: true
}
};
window.onload = function() {
var ctx = document.getElementById('pie-chart').getContext('2d');
window.myPie = new Chart(ctx, config);
};
</script>
{% endblock %}
In the example above the base.html
template is not important but you can see it in the code example I shared in the
end of this post.
This strategy is not ideal but works fine. The bad thing is that we are using the Django Template Language to interfere
with the JavaScript logic. When we put {{ data|safe}}
we are injecting a variable that came from
the server directly in the JavaScript code.
The code above looks like this:
As the title says, we are now going to render a bar chart using an async call.
views.py
from django.shortcuts import render
from django.db.models import Sum
from django.http import JsonResponse
from mysite.core.models import City
def home(request):
    return render(request, 'home.html')


def population_chart(request):
    labels = []
    data = []

    queryset = City.objects.values('country__name').annotate(country_population=Sum('population')).order_by('-country_population')
    for entry in queryset:
        labels.append(entry['country__name'])
        data.append(entry['country_population'])

    return JsonResponse(data={
        'labels': labels,
        'data': data,
    })
So here we are using two views. The home view would be the main page where the chart is loaded. The other view, population_chart, would be the one with the sole responsibility of aggregating the data and returning a JSON response with the labels and data.
If you are wondering about what this queryset is doing, it is grouping the cities by the country and aggregating the total population of each country. The result is going to be a list of country + total population. To learn more about this kind of query have a look on this post: How to Create Group By Queries With Django ORM
urls.py
from django.urls import path
from mysite.core import views
urlpatterns = [
path('', views.home, name='home'),
path('population-chart/', views.population_chart, name='population-chart'),
]
home.html
{% extends 'base.html' %}
{% block content %}
<div id="container" style="width: 75%;">
<canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>
</div>
<script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
<script>
$(function () {
var $populationChart = $("#population-chart");
$.ajax({
url: $populationChart.data("url"),
success: function (data) {
var ctx = $populationChart[0].getContext("2d");
new Chart(ctx, {
type: 'bar',
data: {
labels: data.labels,
datasets: [{
label: 'Population',
backgroundColor: 'blue',
data: data.data
}]
},
options: {
responsive: true,
legend: {
position: 'top',
},
title: {
display: true,
text: 'Population Bar Chart'
}
}
});
}
});
});
</script>
{% endblock %}
Now we have a better separation of concerns. Looking at the chart container:
<canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>
We added a reference to the URL that holds the chart rendering logic. Later on we are using it to execute the Ajax call.
var $populationChart = $("#population-chart");
$.ajax({
url: $populationChart.data("url"),
success: function (data) {
// ...
}
});
Inside the success
callback we then finally execute the Chart.js related code using the JsonResponse
data.
I hope this tutorial helped you to get started with working with charts using Chart.js. I published another tutorial on the same subject a while ago but using the Highcharts library. The approach is pretty much the same: How to Integrate Highcharts.js with Django.
If you want to grab the code I used in this tutorial you can find it here: github.com/sibtc/django-chartjs-example.
How to Save Extra Data to a Django REST Framework Serializer [Simple is Better Than Complex]
In this tutorial you are going to learn how to pass extra data to your serializer, before saving it to the database.
When using regular Django forms, there is this common pattern where we save the form with commit=False
and then pass
some extra data to the instance before saving it to the database, like this:
form = InvoiceForm(request.POST)
if form.is_valid():
    invoice = form.save(commit=False)
    invoice.user = request.user
    invoice.save()
This is very useful because we can save the required information using only one database query, and it also makes it possible to handle non-nullable columns that were not defined in the form.
To simulate this pattern using a Django REST Framework serializer you can do something like this:
serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
    serializer.save(user=request.user)
You can also pass several parameters at once:
serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
    serializer.save(user=request.user, date=timezone.now(), status='sent')
In this example I created an app named core
.
models.py
from django.contrib.auth.models import User
from django.db import models
class Invoice(models.Model):
    SENT = 1
    PAID = 2
    VOID = 3
    STATUS_CHOICES = (
        (SENT, 'sent'),
        (PAID, 'paid'),
        (VOID, 'void'),
    )

    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='invoices')
    number = models.CharField(max_length=30)
    date = models.DateTimeField(auto_now_add=True)
    status = models.PositiveSmallIntegerField(choices=STATUS_CHOICES)
    amount = models.DecimalField(max_digits=10, decimal_places=2)
serializers.py
from rest_framework import serializers
from core.models import Invoice
class InvoiceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Invoice
        fields = ('number', 'amount')
views.py
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView
from core.models import Invoice
from core.serializers import InvoiceSerializer
class InvoiceAPIView(APIView):
    def post(self, request):
        serializer = InvoiceSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        serializer.save(user=request.user, status=Invoice.SENT)
        return Response(status=status.HTTP_201_CREATED)
Very similar example, using the same models.py and serializers.py as in the previous example.
views.py
from rest_framework.viewsets import ModelViewSet
from core.models import Invoice
from core.serializers import InvoiceSerializer
class InvoiceViewSet(ModelViewSet):
    queryset = Invoice.objects.all()
    serializer_class = InvoiceSerializer

    def perform_create(self, serializer):
        serializer.save(user=self.request.user, status=Invoice.SENT)
How to Use Date Picker with Django [Simple is Better Than Complex]
In this tutorial we are going to explore three date/datetime pickers options that you can easily use in a Django project. We are going to explore how to do it manually first, then how to set up a custom widget and finally how to use a third-party Django app with support to datetime pickers.
The implementation of a date picker is mostly done on the front-end.
The key part of the implementation is to assure Django will receive the date input value in the correct format, and also that Django will be able to reproduce the format when rendering a form with initial data.
We can also use custom widgets to provide a deeper integration between the front-end and back-end and also to promote better reuse throughout a project.
In the next sections we are going to explore following date pickers:
Tempus Dominus Bootstrap 4 Docs Source
XDSoft DateTimePicker Docs Source
Fengyuan Chen’s Datepicker Docs Source
This is a great JavaScript library and it integrates well with Bootstrap 4. The downside is that it requires moment.js and sort of needs Font Awesome for the icons.
It only makes sense to use this library if you are already using Bootstrap 4 + jQuery; otherwise the list of CSS and JS files may look a little bit overwhelming.
To install it you can use their CDN or download the latest release from their GitHub Releases page.
If you downloaded the code from the releases page, grab the processed code from the build/ folder.
Below, a static HTML example of the datepicker:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Static Example</title>
<!-- Bootstrap 4 -->
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.6/umd/popper.min.js" integrity="sha384-wHAiFfRlMFy6i5SRaxvfOCifBUQy1xHdJ/yoi7FRNXMRBu5WHdZYu1hA6ZOblgut" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/js/bootstrap.min.js" integrity="sha384-B0UglyR+jN6CkvvICOB2joaf5I4l3gm9GU6Hc1og6Ls7i6U/mkkaduKaBhlAXv9k" crossorigin="anonymous"></script>
<!-- Font Awesome -->
<link href="https://stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN" crossorigin="anonymous">
<!-- Moment.js -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.23.0/moment.min.js" integrity="sha256-VBLiveTKyUZMEzJd6z2mhfxIqz3ZATCuVMawPZGzIfA=" crossorigin="anonymous"></script>
<!-- Tempus Dominus Bootstrap 4 -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/css/tempusdominus-bootstrap-4.min.css" integrity="sha256-XPTBwC3SBoWHSmKasAk01c08M6sIA5gF5+sRxqak2Qs=" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/js/tempusdominus-bootstrap-4.min.js" integrity="sha256-z0oKYg6xiLq3yJGsp/LsY9XykbweQlHl42jHv2XTBz4=" crossorigin="anonymous"></script>
</head>
<body>
<div class="input-group date" id="datetimepicker1" data-target-input="nearest">
<input type="text" class="form-control datetimepicker-input" data-target="#datetimepicker1"/>
<div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
<div class="input-group-text"><i class="fa fa-calendar"></i></div>
</div>
</div>
<script>
$(function () {
$("#datetimepicker1").datetimepicker();
});
</script>
</body>
</html>
The challenge now is to have this input snippet integrated with a Django form.
forms.py
from django import forms
class DateForm(forms.Form):
date = forms.DateTimeField(
input_formats=['%d/%m/%Y %H:%M'],
widget=forms.DateTimeInput(attrs={
'class': 'form-control datetimepicker-input',
'data-target': '#datetimepicker1'
})
)
template
<div class="input-group date" id="datetimepicker1" data-target-input="nearest">
{{ form.date }}
<div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
<div class="input-group-text"><i class="fa fa-calendar"></i></div>
</div>
</div>
<script>
$(function () {
$("#datetimepicker1").datetimepicker({
format: 'DD/MM/YYYY HH:mm',
});
});
</script>
The script tag can be placed anywhere because the snippet $(function () { ... });
will run the datetimepicker
initialization when the page is ready. The only requirement is that this script tag is placed after the jQuery script
tag.
You can create the widget in any app you want; here I'm going to assume we have a Django app named core.
core/widgets.py
from django.forms import DateTimeInput
class BootstrapDateTimePickerInput(DateTimeInput):
template_name = 'widgets/bootstrap_datetimepicker.html'
def get_context(self, name, value, attrs):
datetimepicker_id = 'datetimepicker_{name}'.format(name=name)
if attrs is None:
attrs = dict()
attrs['data-target'] = '#{id}'.format(id=datetimepicker_id)
attrs['class'] = 'form-control datetimepicker-input'
context = super().get_context(name, value, attrs)
context['widget']['datetimepicker_id'] = datetimepicker_id
return context
In the implementation above we generate a unique ID datetimepicker_id
and also include it in the widget context.
Then the front-end implementation is done inside the widget HTML snippet.
widgets/bootstrap_datetimepicker.html
<div class="input-group date" id="{{ widget.datetimepicker_id }}" data-target-input="nearest">
{% include "django/forms/widgets/input.html" %}
<div class="input-group-append" data-target="#{{ widget.datetimepicker_id }}" data-toggle="datetimepicker">
<div class="input-group-text"><i class="fa fa-calendar"></i></div>
</div>
</div>
<script>
$(function () {
$("#{{ widget.datetimepicker_id }}").datetimepicker({
format: 'DD/MM/YYYY HH:mm',
});
});
</script>
Note how we make use of the built-in django/forms/widgets/input.html
template.
Now the usage:
core/forms.py
from django import forms
from .widgets import BootstrapDateTimePickerInput
class DateForm(forms.Form):
date = forms.DateTimeField(
input_formats=['%d/%m/%Y %H:%M'],
widget=BootstrapDateTimePickerInput()
)
Now simply render the field:
template
{{ form.date }}
The good thing about having the widget is that your form could have several date fields using it, and you could then simply render the whole form like this:
<form method="post">
{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Submit">
</form>
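To close the loop, here is a minimal sketch of a view that handles this form (the template name and the redirect target are illustrative assumptions, not from the original tutorial):
views.py
from django.shortcuts import redirect, render
from .forms import DateForm

def date_view(request):
    if request.method == 'POST':
        form = DateForm(request.POST)
        if form.is_valid():
            # form.cleaned_data['date'] is a datetime parsed with '%d/%m/%Y %H:%M'
            return redirect('home')
    else:
        form = DateForm()
    return render(request, 'date_form.html', {'form': form})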
The XDSoft DateTimePicker is a very versatile date picker and doesn’t rely on moment.js or Bootstrap, although it looks good in a Bootstrap website.
It is easy to use and very straightforward.
You can download the source from GitHub releases page.
Below, a static example so you can see the minimum requirements and how all the pieces come together:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Static Example</title>
<!-- jQuery -->
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<!-- XDSoft DateTimePicker -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.min.css" integrity="sha256-DOS9W6NR+NFe1fUhEE0PGKY/fubbUCnOfTje2JMDw3Y=" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.full.min.js" integrity="sha256-FEqEelWI3WouFOo2VWP/uJfs1y8KJ++FLh2Lbqc8SJk=" crossorigin="anonymous"></script>
</head>
<body>
<input id="datetimepicker" type="text">
<script>
$(function () {
$("#datetimepicker").datetimepicker();
});
</script>
</body>
</html>
A basic integration with Django would look like this:
forms.py
from django import forms
class DateForm(forms.Form):
date = forms.DateTimeField(input_formats=['%d/%m/%Y %H:%M'])
Simple form, default widget, nothing special.
Now using it on the template:
template
{{ form.date }}
<script>
$(function () {
$("#id_date").datetimepicker({
format: 'd/m/Y H:i',
});
});
</script>
The id_date is the default ID that Django generates for form fields (id_ + the field name).
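You can check the generated ID quickly in a Django shell; a small illustrative snippet (assuming the form module is importable):
>>> form = DateForm()
>>> form['date'].id_for_label
'id_date'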
core/widgets.py
from django.forms import DateTimeInput
class XDSoftDateTimePickerInput(DateTimeInput):
template_name = 'widgets/xdsoft_datetimepicker.html'
widgets/xdsoft_datetimepicker.html
{% include "django/forms/widgets/input.html" %}
<script>
$(function () {
$("input[name='{{ widget.name }}']").datetimepicker({
format: 'd/m/Y H:i',
});
});
</script>
To make the implementation more generic, this time we select the field to initialize the component by its name instead of its ID, in case the user changes the ID prefix.
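For instance, if the form were instantiated with a custom ID prefix, a selector based on #id_date would stop matching, while the name-based selector above still works (illustrative only):
# With auto_id='custom_%s' the input is rendered with id="custom_date",
# so "#id_date" no longer matches, but input[name='date'] still does.
form = DateForm(auto_id='custom_%s')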
Now the usage:
core/forms.py
from django import forms
from .widgets import XDSoftDateTimePickerInput
class DateForm(forms.Form):
date = forms.DateTimeField(
input_formats=['%d/%m/%Y %H:%M'],
widget=XDSoftDateTimePickerInput()
)
template
{{ form.date }}
This is a very beautiful and minimalist date picker. Unfortunately there is no time support. But if you only need dates this is a great choice.
To install this datepicker you can either use their CDN or download the sources from their GitHub releases page. Please note that they do not provide compiled/processed JavaScript files in the releases, but you can download those to your local machine from the CDN.
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Static Example</title>
<style>body {font-family: Arial, sans-serif;}</style>
<!-- jQuery -->
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<!-- Fengyuan Chen's Datepicker -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.css" integrity="sha256-b88RdwbRJEzRx95nCuuva+hO5ExvXXnpX+78h8DjyOE=" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.js" integrity="sha256-/7FLTdzP6CfC1VBAj/rsp3Rinuuu9leMRGd354hvk0k=" crossorigin="anonymous"></script>
</head>
<body>
<input id="datepicker">
<script>
$(function () {
$("#datepicker").datepicker();
});
</script>
</body>
</html>
A basic integration with Django (note that we are now using DateField
instead of DateTimeField
):
forms.py
from django import forms
class DateForm(forms.Form):
date = forms.DateField(input_formats=['%d/%m/%Y'])
template
{{ form.date }}
<script>
$(function () {
$("#id_date").datepicker({
format:'dd/mm/yyyy',
});
});
</script>
core/widgets.py
from django.forms import DateInput
class FengyuanChenDatePickerInput(DateInput):
template_name = 'widgets/fengyuanchen_datepicker.html'
widgets/fengyuanchen_datepicker.html
{% include "django/forms/widgets/input.html" %}
<script>
$(function () {
$("input[name='{{ widget.name }}']").datepicker({
format:'dd/mm/yyyy',
});
});
</script>
Usage:
core/forms.py
from django import forms
from .widgets import FengyuanChenDatePickerInput
class DateForm(forms.Form):
date = forms.DateField(
input_formats=['%d/%m/%Y'],
widget=FengyuanChenDatePickerInput()
)
template
{{ form.date }}
The implementation is very similar no matter which date/datetime picker you are using. Hopefully this tutorial provided some insights on how to integrate this kind of front-end library into a Django project.
As always, the best source of information about each of those libraries are their official documentation.
I also created an example project to show the usage and implementation of the widgets for each of the libraries presented in this tutorial. Grab the source code at github.com/sibtc/django-datetimepicker-example.
How to Implement Grouped Model Choice Field [Simple is Better Than Complex]
The Django forms API have two field types to work with multiple options: ChoiceField
and ModelChoiceField
.
Both use select input as the default widget and they work in a similar way, except that ModelChoiceField
is designed
to handle QuerySets and work with foreign key relationships.
A basic implementation using a ChoiceField
would be:
class ExpenseForm(forms.Form):
CHOICES = (
(11, 'Credit Card'),
(12, 'Student Loans'),
(13, 'Taxes'),
(21, 'Books'),
(22, 'Games'),
(31, 'Groceries'),
(32, 'Restaurants'),
)
amount = forms.DecimalField()
date = forms.DateField()
category = forms.ChoiceField(choices=CHOICES)
You can also organize the choices in groups to generate the <optgroup>
tags like this:
class ExpenseForm(forms.Form):
CHOICES = (
('Debt', (
(11, 'Credit Card'),
(12, 'Student Loans'),
(13, 'Taxes'),
)),
('Entertainment', (
(21, 'Books'),
(22, 'Games'),
)),
('Everyday', (
(31, 'Groceries'),
(32, 'Restaurants'),
)),
)
amount = forms.DecimalField()
date = forms.DateField()
category = forms.ChoiceField(choices=CHOICES)
When you are using a ModelChoiceField, unfortunately there is no built-in solution.
Recently I found a nice solution on Django's ticket tracker, where someone proposed adding an opt_group argument to ModelChoiceField.
While the discussion is still ongoing, Simon Charette proposed a really good solution.
Let’s see how we can integrate it in our project.
First consider the following models:
models.py
from django.db import models
class Category(models.Model):
name = models.CharField(max_length=30)
parent = models.ForeignKey('Category', on_delete=models.CASCADE, null=True)
def __str__(self):
return self.name
class Expense(models.Model):
amount = models.DecimalField(max_digits=10, decimal_places=2)
date = models.DateField()
category = models.ForeignKey(Category, on_delete=models.CASCADE)
def __str__(self):
return self.amount
So now our category, instead of being a regular choices field, is a model in its own right, and the Expense model has a relationship with it via a foreign key.
If we create a ModelForm using this model, the result will be very similar to our first example.
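For reference, such a ModelForm might look like the sketch below; the plain ModelChoiceField it generates for category renders a flat list of options with no <optgroup> tags:
forms.py
from django import forms
from .models import Expense

class ExpenseForm(forms.ModelForm):
    class Meta:
        model = Expense
        fields = ('amount', 'date', 'category')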
To simulate grouped categories you will need the code below. First create a new module named fields.py:
fields.py
from functools import partial
from itertools import groupby
from operator import attrgetter
from django.forms.models import ModelChoiceIterator, ModelChoiceField
class GroupedModelChoiceIterator(ModelChoiceIterator):
def __init__(self, field, groupby):
self.groupby = groupby
super().__init__(field)
def __iter__(self):
if self.field.empty_label is not None:
yield ("", self.field.empty_label)
queryset = self.queryset
# Can't use iterator() when queryset uses prefetch_related()
if not queryset._prefetch_related_lookups:
queryset = queryset.iterator()
for group, objs in groupby(queryset, self.groupby):
yield (group, [self.choice(obj) for obj in objs])
class GroupedModelChoiceField(ModelChoiceField):
def __init__(self, *args, choices_groupby, **kwargs):
if isinstance(choices_groupby, str):
choices_groupby = attrgetter(choices_groupby)
elif not callable(choices_groupby):
raise TypeError('choices_groupby must either be a str or a callable accepting a single argument')
self.iterator = partial(GroupedModelChoiceIterator, groupby=choices_groupby)
super().__init__(*args, **kwargs)
And here is how you use it in your forms:
forms.py
from django import forms
from .fields import GroupedModelChoiceField
from .models import Category, Expense
class ExpenseForm(forms.ModelForm):
category = GroupedModelChoiceField(
queryset=Category.objects.exclude(parent=None),
choices_groupby='parent'
)
class Meta:
model = Expense
fields = ('amount', 'date', 'category')
Because in the example above I used a self-referencing relationship, I had to add exclude(parent=None) to keep the “group categories” from showing up in the select input as valid options.
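To illustrate how the data could be laid out, the categories from the first example might be created like this, with the parent rows acting as the group labels (an illustrative sketch in a Django shell; the app name is an assumption):
from core.models import Category  # assuming the app is named core

debt = Category.objects.create(name='Debt')
Category.objects.create(name='Credit Card', parent=debt)
Category.objects.create(name='Student Loans', parent=debt)

entertainment = Category.objects.create(name='Entertainment')
Category.objects.create(name='Books', parent=entertainment)
Category.objects.create(name='Games', parent=entertainment)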
You can download the code used in this tutorial from GitHub: github.com/sibtc/django-grouped-choice-field-example
Credits for the solution go to Simon Charette on the Django ticket tracker.
How to Use JWT Authentication with Django REST Framework [Simple is Better Than Complex]
JWT stands for JSON Web Token and it is an authentication strategy used by client/server applications where the client is a web application using JavaScript and some front-end framework like Angular, React, or Vue.js.
In this tutorial we are going to explore the specifics of JWT authentication. If you want to learn more about Token-based authentication using Django REST Framework (DRF), or if you want to know how to start a new DRF project you can read this tutorial: How to Implement Token Authentication using Django REST Framework. The concepts are the same, we are just going to switch the authentication backend.
The JWT is just an authorization token that should be included in all requests:
curl http://127.0.0.1:8000/hello/ -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9.Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY'
The JWT is acquired by exchanging a username + password for an access token and a refresh token.
The access token is usually short-lived (expires in 5 min or so, can be customized though).
The refresh token lives a little bit longer (expires in 24 hours, also customizable). It is comparable to an authentication session. After it expires, you need a full login with username + password again.
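Both lifetimes can be changed through the SIMPLE_JWT settings of the djangorestframework_simplejwt library used later in this tutorial; a minimal sketch (the values shown are simply the library defaults):
settings.py
from datetime import timedelta

SIMPLE_JWT = {
    'ACCESS_TOKEN_LIFETIME': timedelta(minutes=5),
    'REFRESH_TOKEN_LIFETIME': timedelta(days=1),
}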
Why is that?
It’s a security feature, and it's also because the JWT holds a little bit more information. If you look closely at the example I gave above, you will see the token is composed of three parts:
xxxxx.yyyyy.zzzzz
Those are three distinctive parts that compose a JWT:
header.payload.signature
So we have here:
header = eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
payload = eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9
signature = Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY
This information is encoded using Base64. If we decode it, we will see something like this:
header
{
"typ": "JWT",
"alg": "HS256"
}
payload
{
"token_type": "access",
"exp": 1543828431,
"jti": "7f5997b7150d46579dc2b49167097e7b",
"user_id": 1
}
signature
The signature is issued by the JWT backend, using the header base64 + payload base64 + SECRET_KEY
. Upon each request
this signature is verified. If any information in the header or in the payload was changed by the client it will
invalidate the signature. The only way of checking and validating the signature is by using your application’s
SECRET_KEY
. Among other things, that’s why you should always keep your SECRET_KEY
secret!
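You can inspect the header and payload yourself with nothing but the standard library; a quick illustrative snippet (the padding is re-added because JWT uses unpadded base64url encoding):
import base64
import json

payload = 'eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9'
padded = payload + '=' * (-len(payload) % 4)  # restore base64 padding
print(json.loads(base64.urlsafe_b64decode(padded)))
# {'token_type': 'access', 'exp': 1543828431, 'jti': '7f5997b7150d46579dc2b49167097e7b', 'user_id': 1}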
For this tutorial we are going to use the djangorestframework_simplejwt
library, recommended by the DRF developers.
pip install djangorestframework_simplejwt
settings.py
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': [
'rest_framework_simplejwt.authentication.JWTAuthentication',
],
}
urls.py
from django.urls import path
from rest_framework_simplejwt import views as jwt_views
urlpatterns = [
# Your URLs...
path('api/token/', jwt_views.TokenObtainPairView.as_view(), name='token_obtain_pair'),
path('api/token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token_refresh'),
]
For this tutorial I will use the following route and API view:
views.py
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
class HelloView(APIView):
permission_classes = (IsAuthenticated,)
def get(self, request):
content = {'message': 'Hello, World!'}
return Response(content)
urls.py
from django.urls import path
from myapi.core import views
urlpatterns = [
path('hello/', views.HelloView.as_view(), name='hello'),
]
I will be using HTTPie to consume the API endpoints via the terminal. But you can also use cURL (readily available on many operating systems) to try things out locally. Alternatively, you can use the DRF browsable web interface by accessing the endpoint URLs in the browser.
The first step is to authenticate and obtain the token. The endpoint is /api/token/
and it only accepts POST requests.
http post http://127.0.0.1:8000/api/token/ username=vitor password=123
So basically your response body contains the two tokens:
{
"access": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjU5LCJqdGkiOiIyYmQ1NjI3MmIzYjI0YjNmOGI1MjJlNThjMzdjMTdlMSIsInVzZXJfaWQiOjF9.D92tTuVi_YcNkJtiLGHtcn6tBcxLCBxz9FKD3qzhUg8",
"refresh": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMxMDM1OSwianRpIjoiMjk2ZDc1ZDA3Nzc2NDE0ZjkxYjhiOTY4MzI4NGRmOTUiLCJ1c2VyX2lkIjoxfQ.rA-mnGRg71NEW_ga0sJoaMODS5ABjE5HnxJDb0F8xAo"
}
After that you are going to store both the access token and the refresh token on the client side, usually in localStorage.
In order to access the protected views on the backend (i.e., the API endpoints that require authentication), you should include the access token in the header of all requests, like this:
http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"
You can use this access token for the next five minutes.
After five minutes the token will expire, and if you try to access the view again, you are going to get the following error:
http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"
To get a new access token, you should use the refresh token endpoint /api/token/refresh/
posting the
refresh token:
http post http://127.0.0.1:8000/api/token/refresh/ refresh=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMwODIyMiwianRpIjoiNzAyOGFlNjc0ZTdjNDZlMDlmMzUwYjg3MjU1NGUxODQiLCJ1c2VyX2lkIjoxfQ.Md8AO3dDrQBvWYWeZsd_A1J39z6b6HEwWIUZ7ilOiPE
The response is a new access token that you should use in subsequent requests.
The refresh token is valid for the next 24 hours. When it finally expires too, the user will need to perform a full authentication again using their username and password to get a new set of access token + refresh token.
At first glance the refresh token may look pointless, but in fact it is necessary to make sure the user still has the correct permissions. If your access token has a long expiry time, it may take longer to update the information associated with the token. That's because the authentication check is done by cryptographic means, instead of querying the database and verifying the data. So some information is effectively cached.
There is also a security aspect, in the sense that the refresh token only travels in the POST data, while the access token is sent via an HTTP header, which may be logged along the way. So this also gives a short window of exposure, should your access token be compromised.
This should cover the basics of the back-end implementation. It's worth checking the djangorestframework_simplejwt settings for further customization and to get a better idea of what the library offers.
The implementation on the front end depends on what framework/library you are using; there are libraries and articles covering popular front-end frameworks like Angular, React, and Vue.js.
The code used in this tutorial is available at github.com/sibtc/drf-jwt-example.
Advanced Form Rendering with Django Crispy Forms [Simple is Better Than Complex]
[Django 2.1.3 / Python 3.6.5 / Bootstrap 4.1.3]
In this tutorial we are going to explore some of the Django Crispy Forms features to handle advanced/custom forms rendering. This blog post started as a discussion in our community forum, so I decided to compile the insights and solutions in a blog post to benefit a wider audience.
Table of Contents
Throughout this tutorial we are going to implement the following Bootstrap 4 form using Django APIs:
This was taken from Bootstrap 4 official documentation as an example of how to use form rows.
NOTE!
The examples below refer to a base.html
template. Consider the code below:
base.html
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
</head>
<body>
<div class="container">
{% block content %}
{% endblock %}
</div>
</body>
</html>
Install it using pip:
pip install django-crispy-forms
Add it to your INSTALLED_APPS
and select which styles to use:
settings.py
INSTALLED_APPS = [
...
'crispy_forms',
]
CRISPY_TEMPLATE_PACK = 'bootstrap4'
For detailed instructions about how to install django-crispy-forms
, please refer to this tutorial:
How to Use Bootstrap 4 Forms With Django
The Python code required to represent the form above is the following:
from django import forms
STATES = (
('', 'Choose...'),
('MG', 'Minas Gerais'),
('SP', 'Sao Paulo'),
('RJ', 'Rio de Janeiro')
)
class AddressForm(forms.Form):
email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
password = forms.CharField(widget=forms.PasswordInput())
address_1 = forms.CharField(
label='Address',
widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
)
address_2 = forms.CharField(
widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
)
city = forms.CharField()
state = forms.ChoiceField(choices=STATES)
zip_code = forms.CharField(label='Zip')
check_me_out = forms.BooleanField(required=False)
In this case I’m using a regular Form
, but it could also be a ModelForm
based on a Django model with similar
fields. The state
field and the STATES
choices could be either a foreign key or anything else. Here I’m just using
a simple static example with three Brazilian states.
Template:
{% extends 'base.html' %}
{% block content %}
<form method="post">
{% csrf_token %}
<table>{{ form.as_table }}</table>
<button type="submit">Sign in</button>
</form>
{% endblock %}
Rendered HTML:
Rendered HTML with validation state:
Same form code as in the example before.
Template:
{% extends 'base.html' %}
{% load crispy_forms_tags %}
{% block content %}
<form method="post">
{% csrf_token %}
{{ form|crispy }}
<button type="submit" class="btn btn-primary">Sign in</button>
</form>
{% endblock %}
Rendered HTML:
Rendered HTML with validation state:
Same form code as in the first example.
Template:
{% extends 'base.html' %}
{% load crispy_forms_tags %}
{% block content %}
<form method="post">
{% csrf_token %}
<div class="form-row">
<div class="form-group col-md-6 mb-0">
{{ form.email|as_crispy_field }}
</div>
<div class="form-group col-md-6 mb-0">
{{ form.password|as_crispy_field }}
</div>
</div>
{{ form.address_1|as_crispy_field }}
{{ form.address_2|as_crispy_field }}
<div class="form-row">
<div class="form-group col-md-6 mb-0">
{{ form.city|as_crispy_field }}
</div>
<div class="form-group col-md-4 mb-0">
{{ form.state|as_crispy_field }}
</div>
<div class="form-group col-md-2 mb-0">
{{ form.zip_code|as_crispy_field }}
</div>
</div>
{{ form.check_me_out|as_crispy_field }}
<button type="submit" class="btn btn-primary">Sign in</button>
</form>
{% endblock %}
Rendered HTML:
Rendered HTML with validation state:
We could use the crispy forms layout helpers to achieve the same result as above. The implementation is done inside
the form __init__
method:
forms.py
from django import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Submit, Row, Column
STATES = (
('', 'Choose...'),
('MG', 'Minas Gerais'),
('SP', 'Sao Paulo'),
('RJ', 'Rio de Janeiro')
)
class AddressForm(forms.Form):
email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
password = forms.CharField(widget=forms.PasswordInput())
address_1 = forms.CharField(
label='Address',
widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
)
address_2 = forms.CharField(
widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
)
city = forms.CharField()
state = forms.ChoiceField(choices=STATES)
zip_code = forms.CharField(label='Zip')
check_me_out = forms.BooleanField(required=False)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.layout = Layout(
Row(
Column('email', css_class='form-group col-md-6 mb-0'),
Column('password', css_class='form-group col-md-6 mb-0'),
css_class='form-row'
),
'address_1',
'address_2',
Row(
Column('city', css_class='form-group col-md-6 mb-0'),
Column('state', css_class='form-group col-md-4 mb-0'),
Column('zip_code', css_class='form-group col-md-2 mb-0'),
css_class='form-row'
),
'check_me_out',
Submit('submit', 'Sign in')
)
The template implementation is very minimal:
{% extends 'base.html' %}
{% load crispy_forms_tags %}
{% block content %}
{% crispy form %}
{% endblock %}
The end result is the same.
Rendered HTML:
Rendered HTML with validation state:
You may also customize the field template and easily reuse it throughout your application. Let's say we want to use the custom Bootstrap 4 checkbox:
From the official documentation, the necessary HTML to output the input above:
<div class="custom-control custom-checkbox">
<input type="checkbox" class="custom-control-input" id="customCheck1">
<label class="custom-control-label" for="customCheck1">Check this custom checkbox</label>
</div>
Using the crispy forms API, we can create a new template for this custom field in our “templates” folder:
custom_checkbox.html
{% load crispy_forms_field %}
<div class="form-group">
<div class="custom-control custom-checkbox">
{% crispy_field field 'class' 'custom-control-input' %}
<label class="custom-control-label" for="{{ field.id_for_label }}">{{ field.label }}</label>
</div>
</div>
Now we can create a new crispy field, either in our forms.py module or in a new Python module named fields.py, for example.
forms.py
from crispy_forms.layout import Field
class CustomCheckbox(Field):
template = 'custom_checkbox.html'
We can use it now in our form definition:
forms.py
class CustomFieldForm(AddressForm):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.layout = Layout(
Row(
Column('email', css_class='form-group col-md-6 mb-0'),
Column('password', css_class='form-group col-md-6 mb-0'),
css_class='form-row'
),
'address_1',
'address_2',
Row(
Column('city', css_class='form-group col-md-6 mb-0'),
Column('state', css_class='form-group col-md-4 mb-0'),
Column('zip_code', css_class='form-group col-md-2 mb-0'),
css_class='form-row'
),
CustomCheckbox('check_me_out'), # <-- Here
Submit('submit', 'Sign in')
)
(PS: the AddressForm is the same one defined in the previous example.)
The end result:
There is much more Django Crispy Forms can do. Hopefully this tutorial gave you some extra insights on how to use the form helpers and layout classes. As always, the official documentation is the best source of information:
Django Crispy Forms layouts docs
Also, the code used in this tutorial is available on GitHub at github.com/sibtc/advanced-crispy-forms-examples.
How to Implement Token Authentication using Django REST Framework [Simple is Better Than Complex]
In this tutorial you are going to learn how to implement Token-based authentication using Django REST Framework (DRF). Token authentication works by exchanging a username and password for a token that will be used in all subsequent requests to identify the user on the server side.
The specifics of how the authentication is handled on the client side vary a lot depending on the technology/language/framework you are working with. The client could be a mobile application using iOS or Android. It could be a desktop application using Python or C++. It could be a Web application using PHP or Ruby.
But once you understand the overall process, it’s easier to find the necessary resources and documentation for your specific use case.
Token authentication is suitable for client-server applications, where the token is safely stored. You should never expose your token, as it would be (sort of) equivalent to handing out your username and password.
Table of Contents
So let’s start from the very beginning. Install Django and DRF:
pip install django
pip install djangorestframework
Create a new Django project:
django-admin.py startproject myapi .
Navigate to the myapi folder:
cd myapi
Start a new app. I will call my app core:
django-admin.py startapp core
Here is what your project structure should look like:
myapi/
|-- core/
| |-- migrations/
| |-- __init__.py
| |-- admin.py
| |-- apps.py
| |-- models.py
| |-- tests.py
| +-- views.py
|-- __init__.py
|-- settings.py
|-- urls.py
+-- wsgi.py
manage.py
Add the core app (you created) and the rest_framework app (you installed) to the INSTALLED_APPS
, inside the
settings.py module:
myapi/settings.py
INSTALLED_APPS = [
# Django Apps
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# Third-Party Apps
'rest_framework',
# Local Apps (Your project's apps)
'myapi.core',
]
Return to the project root (the folder where the manage.py script is), and migrate the database:
python manage.py migrate
Let’s create our first API view just to test things out:
myapi/core/views.py
from rest_framework.views import APIView
from rest_framework.response import Response
class HelloView(APIView):
def get(self, request):
content = {'message': 'Hello, World!'}
return Response(content)
Now register a path in the urls.py module:
myapi/urls.py
from django.urls import path
from myapi.core import views
urlpatterns = [
path('hello/', views.HelloView.as_view(), name='hello'),
]
So now we have an API with a single endpoint, /hello/, to which we can make GET requests. We can use the browser to consume this endpoint, just by accessing the URL http://127.0.0.1:8000/hello/.
We can also ask to receive the response as plain JSON data by passing the format parameter in the querystring, like http://127.0.0.1:8000/hello/?format=json.
Both methods are fine for trying out a DRF API, but sometimes a command line tool is handier, as we can play more easily with the request headers. You can use cURL, which is widely available on all major Linux and macOS distributions:
curl http://127.0.0.1:8000/hello/
But usually I prefer to use HTTPie, which is a pretty awesome Python command line tool:
http http://127.0.0.1:8000/hello/
Now let’s protect this API endpoint so we can implement the token authentication:
myapi/core/views.py
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated # <-- Here
class HelloView(APIView):
permission_classes = (IsAuthenticated,) # <-- And here
def get(self, request):
content = {'message': 'Hello, World!'}
return Response(content)
Try again to access the API endpoint:
http http://127.0.0.1:8000/hello/
And now we get an HTTP 403 Forbidden error. Now let’s implement the token authentication so we can access this endpoint.
We need to add two pieces of information to our settings.py module. First, add rest_framework.authtoken to your INSTALLED_APPS, then include TokenAuthentication in REST_FRAMEWORK:
myapi/settings.py
INSTALLED_APPS = [
# Django Apps
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# Third-Party Apps
'rest_framework',
'rest_framework.authtoken', # <-- Here
# Local Apps (Your project's apps)
'myapi.core',
]
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': [
'rest_framework.authentication.TokenAuthentication', # <-- And here
],
}
Migrate the database to create the table that will store the authentication tokens:
python manage.py migrate
Now we need a user account. Let’s just create one using the manage.py
command line utility:
python manage.py createsuperuser --username vitor --email vitor@example.com
The easiest way to generate a token, just for testing purposes, is to use the command line utility again:
python manage.py drf_create_token vitor
This piece of information, the random string 9054f7aa9305e012b3c2300408c3dfdf390fcddf, is what we are going to use next to authenticate.
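The same token can also be created or fetched programmatically, for example in a Django shell; a small sketch using DRF's Token model:
from django.contrib.auth.models import User
from rest_framework.authtoken.models import Token

user = User.objects.get(username='vitor')
token, created = Token.objects.get_or_create(user=user)
print(token.key)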
But now that we have the TokenAuthentication
in place, let’s try to make another request to our /hello/
endpoint:
http http://127.0.0.1:8000/hello/
Notice how our API is now providing some extra information to the client on the required authentication method.
So finally, let’s use our token!
http http://127.0.0.1:8000/hello/ 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'
And that’s pretty much it. From now on, all subsequent requests should include the header Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf.
The formatting looks a little odd and is usually a point of confusion; exactly how you set this header depends on the client you are using.
For example, if we were using cURL, the command would be something like this:
curl http://127.0.0.1:8000/hello/ -H 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'
Or if it was a Python requests call:
import requests
url = 'http://127.0.0.1:8000/hello/'
headers = {'Authorization': 'Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'}
r = requests.get(url, headers=headers)
Or if we were using Angular, you could implement an HttpInterceptor
and set a header:
import { Injectable } from '@angular/core';
import { HttpRequest, HttpHandler, HttpEvent, HttpInterceptor } from '@angular/common/http';
import { Observable } from 'rxjs';
@Injectable()
export class AuthInterceptor implements HttpInterceptor {
intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
const user = JSON.parse(localStorage.getItem('user'));
if (user && user.token) {
request = request.clone({
setHeaders: {
Authorization: `Token ${user.token}`
}
});
}
return next.handle(request);
}
}
DRF provides an endpoint for users to request an authentication token using their username and password.
Include the following route in the urls.py module:
myapi/urls.py
from django.urls import path
from rest_framework.authtoken.views import obtain_auth_token # <-- Here
from myapi.core import views
urlpatterns = [
path('hello/', views.HelloView.as_view(), name='hello'),
path('api-token-auth/', obtain_auth_token, name='api_token_auth'), # <-- And here
]
So now we have a brand new API endpoint, which is /api-token-auth/
. Let’s first inspect it:
http http://127.0.0.1:8000/api-token-auth/
It doesn’t handle GET requests. Basically it’s just a view to receive a POST request with username and password.
Let’s try again:
http post http://127.0.0.1:8000/api-token-auth/ username=vitor password=123
The response body is the token associated with this particular user. From this point on, you store this token and apply it to future requests.
Then again, the way you are going to make the POST request to the API depends on the language/framework you are using.
If this was an Angular client, you could store the token in localStorage; if it was a desktop CLI application, you could store it in a dot file in the user's home directory.
Hopefully this tutorial provided some insights on how the token authentication works. I will try to follow up this tutorial providing some concrete examples of Angular applications, command line applications and Web clients as well.
It is important to note that the default Token implementation has some limitations, such as allowing only one token per user and providing no built-in way to set an expiry date on the token.
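If you want every new user to get a token automatically, a common pattern (also shown in the DRF documentation) is a post_save signal; a minimal sketch, assuming the module is imported at startup:
from django.conf import settings
from django.db.models.signals import post_save
from django.dispatch import receiver
from rest_framework.authtoken.models import Token

@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_auth_token(sender, instance=None, created=False, **kwargs):
    if created:
        Token.objects.create(user=instance)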
You can grab the code used in this tutorial at github.com/sibtc/drf-token-auth-example.
On 1 May 2019 my blog will be 10 years old and then I will (for the time being) stop. It is also time to bring this blog up to date and to keep myself busy with …
Python GUI application: consistent backups with fsarchiver [linux blogs franz ulenaers]
A partition of type "Linux LVM" can be used for logical volumes, but also for a "snapshot"!
A snapshot can be an exact copy of a logical volume frozen at a particular moment: this makes it possible to take consistent backups of logical volumes while those volumes are in use!
My physical and logical volumes were created as follows:
physical volume
pvcreate /dev/sda1
volume group
vgcreate mydell /dev/sda1
logical volumes
lvcreate -L 1G -n boot mydell
lvcreate -L 100G -n data mydell
lvcreate -L 50G -n home mydell
lvcreate -L 50G -n root mydell
lvcreate -L 1G -n swap mydell
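A consistent backup of a logical volume can then be taken from a temporary snapshot; a minimal Python sketch of the idea (device names, sizes and the target path are just examples, and this is not the GUI application itself):
import subprocess

def backup_logical_volume(vg='mydell', lv='home', size='5G', target='/backup/home.fsa'):
    snap = lv + '_snap'
    # freeze the current state of the volume in a snapshot
    subprocess.run(['lvcreate', '--snapshot', '-L', size, '-n', snap, vg + '/' + lv], check=True)
    try:
        # archive the frozen snapshot while the original volume stays in use
        subprocess.run(['fsarchiver', 'savefs', target, '/dev/' + vg + '/' + snap], check=True)
    finally:
        # remove the snapshot again
        subprocess.run(['lvremove', '-f', '/dev/' + vg + '/' + snap], check=True)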
MyCloud procedures [linux blogs franz ulenaers]
The lftpUlefr01Cloudupload procedure is used to upload files and folders to MyCloud.
The lftpUlefr01Cloudmirror procedure is used to fetch changes back.
Both procedures use the lftp program (a "sophisticated file transfer program") and are used to keep a laptop and a desktop synchronized.
The procedures were adapted so that hidden files and hidden folders are also processed; in addition, for the mirror, certain files and folders that rarely change were filtered out (--exclude) so that they are not processed again.
On the Cloud they remain as a backup, but not on the various laptops (this was done for older mails from 2016, months 2016-11 and 2016-12, and for all earlier months of 2017 up to and including September)!
See the attachments.
Python GUI application tune2fs [linux blogs franz ulenaers]
Created Wednesday, 18 October 2017
Written in the Python programming language using Gtk+ 3.
Start it in a terminal with: sudo python mytune2fs.py
Or compile the Python source and start the compiled version.
Python GUI application myarchive.py [linux blogs franz ulenaers]
Created Friday, 13 October 2017
Start it in terminal mode with:
* sudo python myarchive.py
* sudo python myarchive2.py
Or build a compiled version and start the generated objects.
python myfsck.py [linux blogs franz ulenaers]
Created Friday, 13 October 2017
See the attached file myfsck.py.
This application can mount and unmount devices, but it is mainly intended to run the fsck command.
Root privileges are required!
Help?
* start it in terminal mode
* sudo python myfsck.py
Making a file impossible to modify, rename or delete in Linux! [linux blogs franz ulenaers]
How: sudo chattr +i /data/Encrypt/.encfs6.xml
You cannot modify the file, you cannot rename the file, and you cannot delete the file, even if you are root.
Backup laptop [linux blogs franz ulenaers]
Links in Linux [linux blogs franz ulenaers]
On Linux you can give a file multiple names, so you can store a file in several places in the file tree without taking up (much) extra space on the hard disk.
There are two kinds of links:
hard links
symbolic links
A hard link uses the same file number (inode).
A hard link does not work for a directory!
A hard link must be on the same filesystem, and the original file must exist!
With a symbolic link, the file gets a new file number (inode); the file it points to does not have to exist.
A symbolic link also works for a directory.
The file linuxcursus is 4.2M in size, inode number 293800.
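Both kinds of links can also be created from Python; a small illustrative sketch (the file names are just examples):
import os

os.link('linuxcursus', 'linuxcursus.hard')     # hard link: same inode, same filesystem
os.symlink('linuxcursus', 'linuxcursus.soft')  # symbolic link: new inode, target may even be missing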
Samsung Galaxy Z Flip, S20(+) and S20 Ultra hands-on [Laatste Artikelen - Webwereld]
Samsung invited us to take a close look at its three newest smartphones. We gladly took the opportunity and share our findings with you.
Hands-on: Synology Virtual Machine Manager [Laatste Artikelen - Webwereld]
It is well known by now that your NAS can be used for much more than just storing files, but did you know that you can also manage virtual machines with it? We explain how.
What you need to know about FIDO keys [Laatste Artikelen - Webwereld]
Thanks to the FIDO2 standard, it is possible to log in securely to various online services without a password. Microsoft and Google, among others, already offer options for this. More organizations will probably follow this year.
How to use your iPhone without an Apple ID [Laatste Artikelen - Webwereld]
These days you have to create an account for just about everything you want to do online, even if you don't plan to work online or simply don't feel like sharing your data with the manufacturer. Today we show you how to achieve that with your iPhone or iPad.
Major Internet Explorer flaw already being exploited in the wild [Laatste Artikelen - Webwereld]
A new zero-day vulnerability has been discovered in Microsoft Internet Explorer. The new flaw is already being exploited, and a security update is not yet available.
How to install Chrome extensions in the new Edge [Laatste Artikelen - Webwereld]
The new version of Edge is built on code from the Chromium project, but in the default configuration extensions can only be installed via the Microsoft Store. Fortunately, that is fairly easy to change.
Windows 10 upgrade still free [Laatste Artikelen - Webwereld]
A few years ago Microsoft gave users the option to upgrade from Windows 7 to Windows 10 for free. At times this went so far that even users who didn't want the upgrade received it. The offer has long since ended, but upgrading for free is still possible and it is now easier than ever. We tell you how to do it.
Chrome, Edge, Firefox: which browser is the fastest? [Laatste Artikelen - Webwereld]
A lot has changed in the market for PC browsers. About five years ago there was more competition and more fully independent development; now only two engines remain: the one behind Chrome and the one behind Firefox. With the release of Microsoft's Blink-based Edge this month, we look at benchmarks and practical tests.
Cooler Master redesigns thermal paste tubes because of drug suspicions [Laatste Artikelen - Webwereld]
Cooler Master has changed the look of its thermal paste syringes because, by its own account, the company is tired of having to explain to parents that the contents are not drugs but thermal paste.
Mounting a USB stick without root, setting labels, making a filesystem clean [ulefr01 - blog franz ulenaers]
Embedded Linux Engineer [Job Openings]
You're eager to work with Linux in an exciting environment. You have a lot of PC equipment experience. Prior experience with embedded Linux or small footprint distributions is considered a plus. Region East/West Flanders
We're looking for someone capable of teaching Linux and/or Solaris professionally. Ideally the candidate has experience with teaching Linux, and possibly other non-Windows OSes as well.
Kernel Developer [Job Openings]
We're looking for someone with kernel device driver developement experience. Preferably, but not necessary with knowledge of AV or TV devices.
C/C++ Developers [Job Openings]
We're looking for Linux C/C++ developers. Region Leuven.
Feed | RSS | Last fetched | Next fetched after |
---|---|---|---|
Computable | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
GNOMON | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
http://www.h-online.com/news/atom.xml | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
https://www.heise.de/en | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
Job Openings | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
Laatste Artikelen - Webwereld | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
linux blogs franz ulenaers | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
Linux Journal - The Original Magazine of the Linux Community | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
Linux Today | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
OMG! Ubuntu! | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
Planet Python | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
Press Releases Archives - The Document Foundation Blog | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
Simple is Better Than Complex | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
Slashdot: Linux | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
Tech Drive-in | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |
ulefr01 - blog franz ulenaers | XML | 15-02-2025, 22:34 | 16-02-2025, 01:34 |