07-10-2024

17:01

LibreOffice 24.8.1, the first minor release of the recently announced LibreOffice 24.8 family, is available for download [Press Releases Archives - The Document Foundation Blog]

The LibreOffice 24.8 family is optimised for the privacy-conscious office suite user who wants full control over the information they share

Berlin, 12 September 2024 – LibreOffice 24.8.1, the first minor release of the LibreOffice 24.8 family of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple Silicon and Intel) and Linux, is available at www.libreoffice.org/download. For users who don’t need the latest features and prefer a more tested version, TDF maintains the previous LibreOffice 24.2 family, with several months of back-ported fixes. The current version is LibreOffice 24.2.6.

LibreOffice is the only software for creating documents containing personal or confidential information that respects the privacy of the user – ensuring that the user is able to decide if and with whom to share the content they create. As such, LibreOffice is the best option for the privacy-conscious office suite user, and offers a feature set comparable to the leading product on the market.

In addition, LibreOffice offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes by optimising the space available on the desktop to put the maximum number of features just a click or two away.

The biggest advantage over competing products is the LibreOffice Technology Engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those from ecosystem companies – are based. This allows LibreOffice to provide a better user experience and to produce identical and fully interoperable documents based on the two available ISO standards: the Open Document Format (ODT, ODS and ODP) and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a great deal of artificial complexity, which can cause problems for users who are confident that they are using a true open standard.

End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which can be downloaded from the following link: books.libreoffice.org. In addition, they will be able to get first-level technical support from volunteers on the user mailing lists and the Ask LibreOffice website: ask.libreoffice.org.

A short video highlighting the main new features is available on YouTube and PeerTube peertube.opencloud.lu/w/ibmZUeRgnx9bPXQeYUyXTV.

LibreOffice for Enterprise

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.

The Document Foundation has developed a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the provision of an LTS (long-term support) enterprise-optimised version of LibreOffice, plus migration consulting and training provided by certified professionals who offer value-added solutions that are consistent with proprietary offerings. Reference: www.libreoffice.org/get-help/professional-support/.

In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.

LibreOffice 24.8.1 availability

LibreOffice 24.8.1 is available from www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 (no longer supported by Microsoft) and Apple macOS 10.15. Products based on LibreOffice technology for Android and iOS are listed at www.libreoffice.org/download/android-and-ios/.

LibreOffice users, free software advocates and community members can support The Document Foundation by making a donation at www.libreoffice.org/donate.

Bugs fixed: RC1 and RC2

23 Best Free and Open Source GUI Internet Radio Software [Linux Today]

Here’s our verdict on the best GUI-based internet radio software.

CachyOS October 2024 Update Brings Enhanced AMD Support [Linux Today]

Arch-based CachyOS’s October ’24 update fixes AMD GPU issues, improves the KDE Wayland session, and upgrades Python and Mesa.

Zato Blog: API Testing in Pure English [Planet Python]

How to test APIs in pure English

Do you have 20 minutes to learn how to test APIs in pure English, without any programming needed?

Great, the API testing tutorial is here.

Right after you complete it, you'll be able to write API tests like the one shown there.

Next steps:

➤ Read about how to use Python to build and integrate enterprise APIs that your tests will cover
➤ Python API integration tutorial
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?

09:30

Pine64's Linux-Powered E-Ink Tablet is Making a Return [Slashdot: Linux]

"Pine64 has confirmed that its open-source e-ink tablet is returning," reports the blog OMG Ubuntu: The [10.1-inch e-ink display] PineNote was announced in 2021, building on the success of its non-SBC devices like the PinePhone (and later Pro model), the PineTab, and PineBook devices. Like most of Pine64's devices, software support is largely tackled by the community. But only a small batch of developer units were ever sold, primarily by enthusiasts within the open-source community who had the knowledge and desire to work on getting a modern Linux OS to run on the hardware, and adapt to the e-ink display. That process has taken a while, as Pine64's community bloggers explain: "The PineNote was stuck in a chicken-and-egg situation because of the very high cost of manufacturing the device (ePaper screens are sadly still expensive), and so the risk of manufacturing units that then didn't have a working Linux OS and would not sell was huge." However, the proverbial egg has finally hatched. The PineNote now has a reliable Debian-based OS, developed by Maximilian Weigand. This is described as "not only a bare-bones capable OS but a genuinely daily-usable system that 'just works'" according to the Pine64 blog. ["This is excellent as it also moves the target audience from developers to every day users. You should be able to power on the device and drop into a working Gnome experience."] It is said to use the GNOME desktop plus a handful of extensions designed to ensure the UI adapts to working well with an e-ink display. Software pre-installed includes Xournal++ for note taking, Firefox for web browsing, and Foliate for reading ebooks, among others. [And it even runs Doom...] Existing PineNote owners can download the the new OS image, flash it to their device, and help test it... Touch and stylus input are major selling points of the PineNote, positioning it as a libre alternative to leading e-ink note-taking devices like the Remarkable 2, Onyx BOOX, and Amazon Scribe. "I do not (yet) have a launch date target," according to the blog post, "as behind-the-scenes the Pine Store team are still working on all things production." But the update also links to some blog posts about their free and open source smartwatch PineTime...

Ardour 8.8 (Open Source DAW) Drops Fresh Fixes & Features [OMG! Ubuntu!]

Ardour is one of the most popular and powerful open-source digital audio workstations (DAW) around, and a major new update was recently made available. Now, I can’t profess to be some kind of music-making maestro, though I did spend much of my late teens face-first in FL Studio (formerly Fruity Loops) trying – and failing – to channel my inner Cash Cash (’08 era, before their mainstream genre shift). Ardour 8.8 is the second update to the DAW in 2 weeks because, as the software’s devs explain, “v8.7 […] turned out to have a couple of major issues that required […]

NetworkManager 1.50 Released, Supports veth Config in Terminal UI [OMG! Ubuntu!]

A new version of NetworkManager – used by most Linux distributions (including Ubuntu) to manage wired and wireless network connections – was released this week. NetworkManager 1.50 won’t be included in Ubuntu 24.10 (that ships with v1.48) but I think some of the changes it makes may be worth knowing about all the same. Notably, NetworkManager 1.50 now formally deprecates support for dhclient in favour of its own internal DHCP client. The former is no longer built “…unless explicitely (sic) enabled, and will be removed in a future release.” Will this cause major issues? Unlikely; NetworkManager began […]

Mozilla’s New Logo Looks Even Better Animated [OMG! Ubuntu!]

A few months ago I reported that Mozilla is getting a brand revamp and that it incorporates the non-profit company’s iconic red dinosaur mascot – now I have a bit more info. A reader, Nicolas, recently pointed me to the website of global design agency Jones Knowles Ritchie, who Mozilla hired to update, refine, and revitalise its brand identity. As design agencies go, Jones Knowles Ritchie has considerable cultural cachet, having worked with major world-famous brands ranging from Burger King to Budweiser – and now web browser maker Mozilla. Their website has a dedicated page to showcase their work on […]

Pine64’s Linux-Powered E-Ink Tablet is Making a Return [OMG! Ubuntu!]

Pine64 has confirmed that its open-source e-ink tablet is returning. The PineNote was announced in 2021, building on the success of its non-SBC devices like the PinePhone (and later Pro model), the PineTab, and PineBook devices. Like most of Pine64’s devices, software support is largely tackled by the community. But only a small batch of developer units were ever sold, primarily to enthusiasts within the open-source community who had the knowledge and desire to work on getting a modern Linux OS to run on the hardware, and adapt to the e-ink display. That process has taken a while, as Pine64’s […]

How to Install MongoDB on AlmaLinux 9 [Linux Today]

MongoDB is an open-source, cross-platform, and distributed NoSQL (Non-SQL or Non-Relational) database system. This guide will show you how to install MongoDB on an AlmaLinux 9 server.

Ardour 8.8 DAW Launches with Hot-Fixes and New Features [Linux Today]

Ardour 8.8 Digital Audio Workstation introduces new features, including track dragging, ruler changes, and powerful parallel disk I/O.

Julien Tayon: Bidirectionnal python/tk by talking to tk interpreter back and forth [Planet Python]

Last time I exposed an old trick learned in physics labs for doing C or python/tk like in the old days: by summoning a tcl/tk interpreter and piping commands to it.

But what fun is it?

It's funnier if the tcl/tk interpreter talks back to python :D as an homage to the TK 9 version, awaited for 25 years, that solves a lot of unicode trouble.

Beforehand, to make sense of the code a little warning is required: the classic approach (the commented-out fcntl lines in the code) targets only POSIX environments and loses portability, and it is not the « one best way » for enabling bidirectional talks. By using os.set_blocking(p.stdout.fileno(), False) instead we get portable non-blocking IO, which means this trick has been tested on linux, freeBSD and windows successfully.

First and foremost, the Popen call now uses stdout=PIPE, enabling the channel on which tcl will talk. As a joke, puts/gets are named after the tcl/tk functions and are used in python to push/get strings to/from tcl.

Instead of using multithreading (one thread listening to the output and putting the events in a local queue that the main thread consumes), I chose the funnier technique of setting tcl/tk's output non-blocking. The fcntl variant of this, kept in the comments, is the part that does not work on windows.

Then, I chose not to parse the output of tcl/tk but to exec it, making tcl/tk actually push python commands back to python. That's the exec part of the code.

For this I needed an excuse : so I added buttons to change minutes/hours back and forth.

That's the moment we are all gonna agree that tcl/tk's biggest sin is its default look. Don't worry, the next part is about using themes.

Compared to the first post, changes are minimal :D This is how it should look :

And here is the code, still just below 100 sloc (by 3 lines).
#!/usr/bin/env python
from subprocess import Popen, PIPE
from time import sleep, time, localtime
# import fcntl
import os

# let's talk to tk/tcl directly through p.stdin
p = Popen(['wish'], stdin=PIPE, stdout=PIPE)

# best non portable answer on stackoverflow
#fd = p.stdout.fileno()
#flag = fcntl.fcntl(fd, fcntl.F_GETFL)
#fcntl.fcntl(fd, fcntl.F_SETFL, flag | os.O_NONBLOCK)
# ^-- these 3 lines can be replaced with this one liner --v
# portable non blocking IO
os.set_blocking(p.stdout.fileno(), False)

def puts(s):
    for l in s.split("\n"):
        p.stdin.write((l + "\n").encode())
        p.stdin.flush()

def gets():
    ret=p.stdout.read()
    p.stdout.flush()
    return ret

WIDTH=HEIGHT=400

puts(f"""
canvas .c -width {WIDTH} -height {HEIGHT} -bg white
pack .c
. configure -background white

ttk::button  .ba -command {{  puts ch-=1 }} -text <<
pack .ba -side left   -anchor w
ttk::button .bb -command {{  puts cm-=1 }} -text  <
pack .bb -side left -anchor w
ttk::button .bc -command {{  puts ch+=1 }} -text >> 
pack .bc  -side right -anchor e
ttk::button .bd -command {{  puts cm+=1 }} -text > 
pack .bd  -side right -anchor e
""")

# Constants are CAPitalized in python by convention
from cmath import  pi as PI, e as E
ORIG=complex(WIDTH/2, HEIGHT/2)

# correcting python notations j => I  
I = complex("j")
rad_per_sec = 2.0 * PI /60.0
rad_per_min = rad_per_sec / 60
rad_per_hour = rad_per_min / 12

origin_vector_hand = WIDTH/2 *  I

size_of_sec_hand = .9
size_of_min_hand = .8
size_of_hour_hand = .65

rot_sec = lambda sec : -E ** (I * sec * rad_per_sec )
rot_min = lambda min : -E ** (I *  min * rad_per_min )
rot_hour = lambda hour : -E ** (I * hour * rad_per_hour )

to_real = lambda c1,c2 : "%f %f %f %f" % (c1.real,c1.imag,c2.real, c2.imag)
for n in range(60):
    direction= origin_vector_hand * rot_sec(n)
    start=.9 if n%5 else .85
    puts(f".c create line {to_real(ORIG+start*direction,ORIG+.95*direction)}")
    sleep(.01)

diff_offset_in_sec = (time() % (24*3600)) - \
    localtime()[3]*3600 -localtime()[4] * 60.0 \
    - localtime()[5] 
ch=cm=0
while True:
    # parse tcl output, if any
    back = gets()
    # trying is more concise than checking
    try:
        back = back.decode()
        exec(back)
    except Exception as e:
        pass

    t = time()
    s= t%60
    m = m_in_sec = t%(60 * 60) + cm * 60
    h = h_in_sec = (t- diff_offset_in_sec)%(24*60*60) + ch * 3600 + cm * 60
    puts(".c delete second")
    puts(".c delete minute")
    puts(".c delete hour")
    c0=ORIG+ -.1 * origin_vector_hand * rot_sec(s)
    c1=ORIG+ size_of_sec_hand * origin_vector_hand * rot_sec(s)
    puts( f".c create line {to_real(c0,c1)} -tag second -fill blue -smooth true")
    c1=ORIG+size_of_min_hand * origin_vector_hand * rot_min(m)
    puts(f".c create line {to_real(ORIG, c1)} -tag minute -fill green -smooth true")
    c1=ORIG+size_of_hour_hand * origin_vector_hand * rot_hour(h)
    puts(f".c create line {to_real(ORIG,c1)} -tag hour -fill red -smooth true")
    puts("flush stdout")
    sleep(.1)


Some history about this code.



I have been mentored in a physics lab where we were doing the pipe, fork, dup2 dance to tcl/tk from C to give a nice output to our simulations, so we could check that our intuition was right and extract pictures for the publications. This is a trick that is almost as old as my arteries.
My mentor used to say: we are not coders, we need stuff to work fast; we should neither get drowned in computer complexity or an endless quest for « the one best way », nor be drowned in bugs. We aim for the Keep It Simple Stupid way.

Hence, this is a Keep It Simple Stupid approach that I revived for the sake of seeing if it was still robust after 35 years without using it.

Well, if it's robust and it's working: it ain't stupid even if it isn't the « one best idiomatic way ». :P

Talk Python to Me: #479: Designing Effective Load Tests for Your Python App [Planet Python]

You're about to launch your new app or API, or even just a big refactor of your current project. Will it stand up and deliver when you put it into production or when that big promotion goes live? Or will it wither and collapse? How would you know? Well, you would test that, of course. We have Anthony Shaw back on the podcast to dive into a wide range of tools and techniques for performance and load testing of web apps.

Episode sponsors: Sentry Error Monitoring (code TALKPYTHON), WorkOS, Talk Python Courses

Links from the show:

Anthony on Twitter: @anthonypjshaw
Anthony's PyCon Au talk: youtube.com
locust load testing tool: locust.io
playwright: playwright.dev
mimesis: github.com
mimesis providers: mimesis.name
vscode pets: marketplace.visualstudio.com
vscode power-mode: marketplace.visualstudio.com
opentelemetry: opentelemetry.io
uptime-kuma: github.com
Talk Python uptime / status: talkpython.fm/status
when your serverless computing bill goes parabolic...: youtube.com
Watch this episode on YouTube: youtube.com
Episode transcripts: talkpython.fm

Stay in touch: subscribe on YouTube (talkpython.fm/youtube), follow Talk Python on Mastodon (@talkpython), follow Michael on Mastodon (@mkennedy).

Mariatta: Python Core Sprint 2024: Day 5 [Planet Python]

Python Core Sprint 2024: Day 5

Datetime and Hypothesis

I reviewed some issues that came to the CPython repo. There were a few interesting tickets related to the datetime module. These issues were discovered by Hypothesis, a property-based testing tool for Python. I’ve been hearing a lot about Hypothesis, but never really used it in production or at work. I watched a talk about it at PyCon US many years ago, and I even had an ice cream selfie with Zac, who maintains Hypothesis. Anyway, I’ve been interested in learning more about Hypothesis and how it can catch issues other testing methods miss. This is one of the perks of contributing to open source: getting exposed to things you don’t normally use at work. I think it’s a great way to learn new things.
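
For a flavor of what that looks like, here is a minimal sketch of a property-based test (my own illustration, not one of the actual CPython tickets): Hypothesis generates many datetimes and checks a round-trip property that should always hold.

# A hypothetical pytest-style property test using Hypothesis.
from datetime import datetime, timezone

from hypothesis import given
from hypothesis import strategies as st

@given(st.datetimes(timezones=st.just(timezone.utc)))
def test_isoformat_round_trip(dt):
    # Property: parsing the ISO 8601 string must reproduce the datetime.
    assert datetime.fromisoformat(dt.isoformat()) == dt

Hypothesis then tries hundreds of generated examples, including edge cases (year 1, microsecond boundaries) that hand-written tests tend to miss.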

Seth Michael Larson: EuroPython 2024 talks about security [Planet Python]

EuroPython 2024 talks about security

Published 2024-10-04 by Seth Larson

EuroPython 2024, which took place back in July, published its talk recordings to YouTube earlier this week. I've been under the weather for most of this week, but have had a chance to listen to a few of the security-related talks in between resting.

Counting down for Cyber Resilience Act: Updates and expectations

This talk was delivered by Python Software Foundation Executive Director Deb Nicholson and Board Member Cheuk Ting Ho. The Cyber Resilience Act (CRA) is coming, and it'll affect more software than just the software written in the EU. Deb and Cheuk describe recent developments in the CRA, like the creation of a new entity called the "Open Source Steward", and how open source foundations and maintainers are preparing for the CRA.

For the rest of this year and next year I am focusing on getting the Python ecosystem ready for software security regulations like the CRA and SSDF from the United States.

Starting with improving the Software Bill-of-Materials (SBOM) story for Python, because this is required by both (and likely, future) regulations. Knowing what software you are running is an important first step towards being able to secure that same software.

To collaborate with other open source foundations and projects on this work, I've joined the Open Regulatory Compliance Working Group hosted by the Eclipse Foundation.

Towards licensing standardization in Python packaging

This talk was given by Karolina Surma and it detailed all the work that goes into researching, writing, and having a Python packaging standard accepted (spoiler: it's a lot!). Karolina is working on PEP 639 which is for adopting the SPDX licensing expression and identifier standards in Python as they are the current state of the art for modeling complex licensing situations accurately for machine (and human) consumption.

This work is very important for Software Bill-of-Materials, as they require accurate license information in this exact format. Thanks to Karolina, C.A.M. Gerlach, and many others for working for years on this PEP; it will be useful to so many users once adopted!

The Update Framework (TUF) joins PyPI

This talk was given by Kairo de Araujo and Lukas Pühringer and it detailed the history and current status of The Update Framework (TUF) integration into the Python Package Index.

TUF provides better integrity guarantees for software repositories like PyPI, such as making it more difficult to "compel" the index to serve incorrect artifacts, and making a compromise of PyPI easier to roll back while being certain that files haven't been modified. For a full history and latest status, you can view PEP 458 and the top-level GitHub issue for Warehouse.

I was around for the original key-signing ceremony for the PyPI TUF root keys which was live-streamed back in October 2020. Time flies, huh.

Writing Python like it's Rust: more robust code with type hints

This talk was given by Jakub Beránek about using type hints for more robust Python code. Having written a case-study on urllib3's adoption of type hints to find defects that testing and other tooling missed, I highly recommend type hints for Python code as well.
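
As a small illustration of the idea (my own example, not one from the case study), annotating optional values forces every caller to handle the None case, and a type checker verifies this without running any tests:

from typing import Optional

def parse_retry_after(header: Optional[str]) -> Optional[int]:
    """Return seconds to wait, or None if the header is absent or invalid."""
    if header is None or not header.isdigit():
        return None
    return int(header)

# A type checker flags any caller that does arithmetic on the result
# without first narrowing away the None case.
print(parse_retry_after("120"))  # 120
print(parse_retry_after(None))   # None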

Accelerating Python with Rust: The PyO3 Revolution

This talk was given by Roshan R Chandar about using PyO3 and Rust in Python modules.

Automatic Trusted Publishing with PyPI

This talk was given by Facundo Tuesca on using Trusted Publishing for authenticating with PyPI to publish packages.

Zero Trust APIs with Python

This talk was given by Jose Haro Peralta on how to design and implement secure web APIs using Python, data validation with Pydantic, and testing your APIs using tooling for detecting common security defects.
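
As a rough sketch of the validation part (my own example, assuming Pydantic v2), declaring constraints on a model rejects malformed input before it ever reaches business logic:

from pydantic import BaseModel, Field, ValidationError

class TransferRequest(BaseModel):
    account_id: int = Field(gt=0)           # must be a positive integer
    amount: float = Field(gt=0, le=10_000)  # bounded to a sane range

try:
    TransferRequest(account_id=-1, amount=99.0)
except ValidationError as exc:
    print(exc)  # the invalid request never reaches the handler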

Best practices for securely consuming open source in Python

This talk was given by Cira Carey which highlights many of today's threats targeting open source consumers. Users should be aware of these when selecting projects to download and install.

06-10-2024

11:27

How to Install Odoo 18 on Ubuntu 24.04 [Linux Today]

Odoo 18 is an open-source suite of business applications that provides a complete ERP (Enterprise Resource Planning) solution for organizations of various sizes. It offers a wide range of integrated tools and modules to help manage all aspects of a business, such as finance, sales, inventory, human resources, and more.

The open-source community edition is free, making it accessible to small businesses and developers. The enterprise edition, on the other hand, offers additional features, services, and support.

Odoo is highly customizable. Businesses can tailor modules to meet their specific needs, create custom workflows, or build entirely new apps using Odoo’s development framework.

In summary, Odoo is versatile business management software that can streamline operations and provide real-time insights, making it an ideal solution for companies looking to optimize their business processes.

In this tutorial, we will show you how to install Odoo 18 on Ubuntu 24.04.

Banana Pi and OpenWrt’s One/AP-24.XY Router Board Hits the Market [Linux Today]

The first official OpenWrt One/AP-24.XY router board goes on sale, featuring MediaTek’s latest SoC with WiFi 6 for enhanced connectivity, priced at $111.

Real Python: Quiz: Iterators and Iterables in Python: Run Efficient Iterations [Planet Python]

In this quiz, you’ll test your understanding of Python’s Iterators and Iterables.

By working through this quiz, you’ll revisit how to create and work with iterators and iterables, understand the differences between them, and review how to use generator functions and the yield statement.
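
As a quick refresher on the territory the quiz covers, a generator function returns an iterator that produces its items lazily:

def countdown(n):
    while n > 0:
        yield n  # produce the next item lazily
        n -= 1

it = countdown(3)   # calling the generator function returns an iterator
print(next(it))     # 3
print(list(it))     # [2, 1] -- iteration resumes where it left off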


05-10-2024

21:52

Thousands of Linux Systems Infected By Stealthy Malware Since 2021 [Slashdot: Linux]

A sophisticated malware strain has infected thousands of Linux systems since 2021, exploiting over 20,000 common misconfigurations and a critical Apache RocketMQ vulnerability, researchers at Aqua Security reported. Dubbed Perfctl, the malware employs advanced stealth techniques, including rootkit installation and process name mimicry, to evade detection. It persists through system reboots by modifying login scripts and copying itself to multiple disk locations. Perfctl hijacks systems for cryptocurrency mining and proxy services, while also serving as a backdoor for additional malware. Despite some antivirus detection, the malware's ability to restart after removal has frustrated system administrators.

How to Install Pydio Cells on AlmaLinux 9 [Linux Today]

Pydio Cells is an open-source document-sharing and collaboration platform for your organization. In this guide, we’ll show you how to install Pydio Cells on an AlmaLinux 9 server.

Fwupd 2.0 Open-Source Linux Firmware Updater Released with Major Changes [Linux Today]

This new major release breaks the libfwupd ABI to drop legacy signing formats for the verification of metadata and firmware, significantly reduce runtime memory usage and CPU startup cost, remove all the long-deprecated legacy CLI tools, remove libgusb and GUdev from plugins in favour of libusb and sysfs, and stream firmware binaries over a file descriptor rather than into memory.

PyGObject: A Guide to Creating Python GUI Applications on Linux [Linux Today]

Creating graphical user interface (GUI) applications is a fantastic way to bring your ideas to life and make your programs more user-friendly.

PyGObject is a Python library that allows developers to create GUI applications on Linux desktops using the GTK (GIMP Toolkit) framework. GTK is widely used in Linux environments, powering many popular desktop applications like Gedit, GNOME terminal, and more.

In this article, we will explore how to create GUI applications under a Linux desktop environment using PyGObject. We’ll start by understanding what PyGObject is, how to install it, and then proceed to building a simple GUI application.
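
As a taste of what the article builds toward, here is a minimal PyGObject window (my own sketch, assuming GTK 3 and the gi bindings are installed, e.g. python3-gi on Debian/Ubuntu):

import gi

gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# A window containing a single label; closing it quits the main loop.
win = Gtk.Window(title="Hello PyGObject")
win.connect("destroy", Gtk.main_quit)
win.add(Gtk.Label(label="Hello from PyGObject!"))
win.show_all()
Gtk.main()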

Fwupd 2.0: Major Changes and New Hardware Support [Linux Today]

Fwupd 2.0 launches with major enhancements: drops old signing formats, adds Darwin support, and revamps device firmware management.

Julien Tayon: Simpler than PySimpleGUI and python tkinter: talking directly to tcl/tk [Planet Python]

Well, the PySimpleGUI licence rug pull reminded me that too many dependencies are not a good thing.

Even though FreeSimpleGUI is a good approach to a simpler tk/tcl binding in python, we can do better, especially if your linux distro splits the python package and you don't have access to tkinter. I am watching you, debian, splitting ALL packages and breaking them, including ... tcl from tk (what a crime).

Under debian this stunt requires you to install tk :

apt install tk8.6


How hard is it when tcl/tk is installed to do GUI programming in tk without tkinter?

Well, it's fairly easy. First and foremost, coders are coders: they can code in whatever language. If you only ever code in one language you can't do docker, simple sysadmin tasks (shell), compile C extensions (make syntax) or web applications (HTML + javascript). Hence, learning more than one language is part of doing python applications.

How hard is coding in tcl/tk natively?

Fairly easy: its difficulty is a little above lua, and way below perl thanks to the absence of references.

What value does tcl have?

It's still used in domain-specific fields such as VLSI (Very Large Scale Integration of electronic components).

So here is the plan: we are gonna make an application that does the math in python (which is perfect for expressing complex math in a more readable way than tcl) and pushes all the GUI to the tk interpreter (well, wish).

We are gonna make a simple wall clock ... and all tcl commands are injected to tcl through the puts function.
#!/usr/bin/env python
from subprocess import Popen, PIPE
from time import sleep, time, localtime

# let's talk to tk/tcl directly through p.stdin
p = Popen(['wish'], stdin=PIPE)

def puts(s):
    for l in s.split("\n"):
        p.stdin.write((l + "\n").encode())
        p.stdin.flush()

WIDTH=HEIGHT=400

puts(f"""
canvas .c -width {WIDTH} -height {HEIGHT} -bg white
pack .c
. configure -background "white"
""")

# Constants are CAPitalized in python by convention
from cmath import  pi as PI, e as E
ORIG=complex(WIDTH/2, HEIGHT/2)

# correcting python notations j => I  
I = complex("j")
rad_per_sec = 2.0 * PI /60.0
rad_per_min = rad_per_sec / 60
rad_per_hour = rad_per_min / 12

origin_vector_hand = WIDTH/2 *  I

size_of_sec_hand = .9
size_of_min_hand = .8
size_of_hour_hand = .65

rot_sec = lambda sec : -E ** (I * sec * rad_per_sec )
rot_min = lambda min : -E ** (I *  min * rad_per_min )
rot_hour = lambda hour : -E ** (I * hour * rad_per_hour )

to_real = lambda c1,c2 : "%f %f %f %f" % (c1.real,c1.imag,c2.real, c2.imag)
for n in range(60):
    direction= origin_vector_hand * rot_sec(n)
    start=.9 if n%5 else .85
    puts(f".c create line {to_real(ORIG+start*direction,ORIG+.95*direction)}")
    sleep(.1)

diff_offset_in_sec = (time() % (24*3600)) - \
    localtime()[3]*3600 -localtime()[4] * 60.0 \
    - localtime()[5] 

while True:
    t = time()
    s= t%60
    m = m_in_sec = t%(60 * 60)
    h = h_in_sec = (t- diff_offset_in_sec)%(24*60*60)
    puts(".c delete second")
    puts(".c delete minute")
    puts(".c delete hour")
    c0=ORIG+ -.1 * origin_vector_hand * rot_sec(s)
    c1=ORIG+ size_of_sec_hand * origin_vector_hand * rot_sec(s)
    puts( f".c create line {to_real(c0,c1)} -tag second -fill blue -smooth true")
    c1=ORIG+size_of_min_hand * origin_vector_hand * rot_min(m)
    puts(f".c create line {to_real(ORIG, c1)} -tag minute -fill green -smooth true")
    c1=ORIG+size_of_hour_hand * origin_vector_hand * rot_hour(h)
    puts(f".c create line {to_real(ORIG,c1)} -tag hour -fill red -smooth true")
    sleep(.1)

Next time as a bonus, I'm gonna do something tkinter cannot do: bidirectional communications (REP/REQ pattern).

Julien Tayon: PySimpleGUI : surviving the rug pull of licence part I [Planet Python]

I liked pySimpleGUI because, as a coder who likes tkinter (the Tk/Tcl bindings) and as a former tcl/tk coder, I enjoyed the syntactic sugar that avoided all the boilerplate required to build an application.

The main advantage was not having to remember in which order to make the pack calls, and not having to do the mainloop call. It was not a revolution, just a simple, elegant evolution, hence I still felt in control.

However, the project made a jerk move by relicensing under a fully proprietary license that requires a key for the software to function.

I will not discuss this, since the point has been made clearly on the python mailing list.

Still, I want to raise 2 points:

  • many of us coders forked the project in order to keep making pull requests
  • it highlights once more the danger of too many dependencies


If you have a working copy of the repository



Well, you can still install a past version of pysimpleGUI, as long as you can do:
pip install  git+https://github.com/jul/PySimpleGUI#egg=pysimpleGUI


Pro: if that version suited you, your old code will work
Con: there will be no updates for the bugs, and it is pretty much a no-go.

Expect free alternative



One of the powers of free software is the power to fork, and some coders have already forked the project into a « free forever » version of pysimpleGUI.
One of these forks is: Free Simple GUI.

Pro: migration is as simple as :

pip install FreeSimpleGUI
and then in the source :
- import PySimpleGUI as sg
+ import FreeSimpleGUI as sg

Con: a project is only as useful as the community's capacity to keep up with solving issues, and its strength to follow up.

Migrating to tkinter



I will cover this in the weeks to come by translating some examples into real-life migrations myself.

Since tkinter has improved a lot and is a pillar of the python distribution (when it is not broken by debian), it is fairly easy.

Pro: diminishes the need for a dependency and empowers you with powerful tk concepts such as variables bound to a widget (an observer pattern); see the sketch below.
Con: PySimpleGUI is a multi-target adaptor to not only tkinter but also remi (web), wx, and Qt. If you were using the versatility of pysimpleGUI and its multi-platform features, you really need to look at the « free forever » alternatives.
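
For instance, here is the variable-to-widget binding mentioned above as a minimal sketch in plain tkinter:

import tkinter as tk

root = tk.Tk()
name = tk.StringVar(value="world")

# Both widgets observe the same variable: typing in the Entry
# updates the Label automatically, with no callback wiring.
tk.Entry(root, textvariable=name).pack()
tk.Label(root, textvariable=name).pack()

root.mainloop()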

Python Engineering at Microsoft: Python in Visual Studio Code – October 2024 Release [Planet Python]

We’re excited to announce the October 2024 release of the Python and Jupyter extensions for Visual Studio Code!

This release includes the following announcements:

  • Run Python tests with coverage
  • Default Python problem matcher
  • Pylance language server mode

If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.

Run Python tests with coverage

You can now run Python tests with coverage in VS Code! Test coverage is a measure of how much of your code is covered by your tests, which can help you identify areas of your code that are not being fully tested.

To run tests with coverage enabled, select the coverage run icon in the Test Explorer or the “Run with coverage” option from any menu you normally trigger test runs from. The Python extension will run coverage using the pytest-cov plugin if you are using pytest, or with coverage.py for unittest.

Note: Before running tests with coverage, make sure to install the correct testing coverage package for your project.

Once the coverage run is complete, lines will be highlighted in the editor for line level coverage. Test coverage results will appear as a “Test Coverage” sub-tab in the Test Explorer, which you can also navigate to with Testing: Focus on Test Coverage View in the Command Palette (F1). On this panel you can view line coverage metrics for each file and folder in your workspace.

Gif showing Python tests running with coverage.

For more information on running Python tests with coverage, see our Python test coverage documentation. For general information on test coverage, see VS Code’s Test Coverage documentation.

Default Python problem matcher

We are excited to announce support for one of our longest-requested features: there is now a default Python problem matcher! Aiming to simplify issue tracking in your Python code and offer more contextual feedback, a problem matcher scans the task’s output for errors and warnings and displays them in the Problems panel, enhancing your development workflow. To integrate it, add "problemMatcher": "$python" to your tasks in tasks.json.

Below is an example of a tasks.json file that uses the default problem matcher for Python:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Run Python",
            "type": "shell",
            "command": "${command:python.interpreterPath}",
            "args": [
                "${file}"
            ],
            "problemMatcher": "$python"
        }
    ]
}

For more information on tasks and problem matchers, visit VS Code’s Tasks documentation.

Pylance language server mode

There’s a new setting python.analysis.languageServerMode that enables you to choose between our current IntelliSense experience or a lightweight one that is optimized for performance. If you don’t require the full breadth of IntelliSense capabilities and prefer Pylance to be as resource-friendly as possible, you can set python.analysis.languageServerMode to light. Otherwise, to continue with the experience you have with Pylance today, you can leave out the setting entirely or explicitly set it to default.

This new functionality overrides the default values of the following settings:

Setting                                  | light mode | default mode
"python.analysis.exclude"                | ["**"]     | []
"python.analysis.useLibraryCodeForTypes" | false      | true
"python.analysis.enablePytestSupport"    | false      | true
"python.analysis.indexing"               | false      | true

The settings above can still be changed individually to override the default values.

Shell integration in Python terminal REPL

The Python extension now includes a python.terminal.shellIntegration.enabled setting to enable a better terminal experience on macOS and Linux machines. When enabled, this setting runs a PYTHONSTARTUP script before you launch the Python REPL in the terminal (for example, by typing and entering python), allowing you to leverage terminal shell integrations such as command decorations, re-run command and run recent commands.

Gif show shell integration enabled in the terminal.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:

  • Experimental Implement Abstract Classes with Copilot Code Action available for GitHub Copilot users using Pylance. Enable by adding "python.analysis.aiCodeActions": {"implementAbstractClasses": true} in your User settings.json
  • Fixed duplicate Python executable code when sending code to the Terminal REPL by using executeCommand rather than sendText for the activation command in @vscode#23929

We would also like to extend special thanks to this month’s contributors:

Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – October 2024 Release appeared first on Python.

Real Python: Quiz: Python import: Advanced Techniques and Tips [Planet Python]

In this quiz, you’ll test your understanding of Python’s import statement and related topics.

By working through this quiz, you’ll revisit how to use modules in your scripts and import modules dynamically at runtime.
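
As a quick reminder of the dynamic side of things, a module can be imported by name at runtime with importlib:

import importlib

module_name = "math"  # decided at runtime, e.g. from config or a plugin list
math = importlib.import_module(module_name)
print(math.sqrt(25))  # 5.0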


Chris Rose: uv, direnv, and simple .envrc files [Planet Python]

I have adopted uv for a lot of Python development. I'm also a heavy user of direnv, which I like as a tool for setting up project-specific environments.

Much like Hynek describes, I've found uv sync to be fast enough to put into the chdir path for new directories. Here's how I'm doing it.

Direnv Libraries

First, it turns out you can pretty easily define custom direnv functions like the built-in ones (layout python, etc...). You do this by adding functions to ~/.config/direnv/direnvrc or in ~/.config/direnv/lib/ as shell scripts. I use this extensively to make my .envrc files easier to maintain and smaller. Now that I'm using uv here is my default for python:

function use_standard-python() {
    source_up_if_exists

    dotenv_if_exists

    source_env_if_exists .envrc.local

    use venv

    uv sync
}

What does that even mean?

Let me explain each of these commands and why they are there:

  • source_up_if_exists -- this direnv stdlib function is here because I often group my projects into directories with common configuration. For example, when working on Chicon 8, I had a top level .envrc that set up the AWS configuration to support deploying Wellington and the Chicon 8 website. This searches up til it finds a .envrc in a higher directory, and uses that. source_up is the noisier, less-adaptable sibling.

  • dotenv_if_exists -- this loads .env from the current working directory. 12-factor apps often have environment-driven configuration, and docker compose uses them relatively seamlessly as well. Doing this makes it easier to run commands from my shell that behave like my development environment.

  • source_env_if_exists .envrc.local -- sometimes you need more complex functionality in a project than just environment variables. Having this here lets me use .envrc.local for that. This comes after .env because sometimes you want to change those values.

  • use venv -- this is a function that activates the project .venv (creating it if needed); I'm old and set in my ways, and I prefer . .venv/bin/activate.fish in my shell to the more newfangled "prefix it with a runner" mode.

  • uv sync -- this is a super fast, "install my development and main dependencies" command. This was way, way too slow with pip, pip-tools, poetry, pdm, or hatch, but with uv, I don't mind having this in my .envrc

Using it in a sentence

With this set up in direnv's configuration, all I need in my .envrc file is this:

use standard-python

I've been using this pattern for a while now; it lets me upgrade how I do default Python setups, with project specific settings, easily.

04-10-2024

17:53

Parabolic (Video Downloader) Rewritten in C++, Adjusts UI [OMG! Ubuntu!]

There are plenty of ways to download videos from well-known video streaming sites on Ubuntu but I find Parabolic the easiest, least hassle option out there. For those yet to hear about it, Parabolic is a GTK4/libadwaita app for Linux (or a Qt one for Windows) that offers what it describes as a ‘basic frontend’ to yt-dlp. All sites supported by yt-dlp are supported in this app. Paste in a URL, validate, and download. Parabolic lets you download multiple videos simultaneously and save them to popular video or audio formats; sign-in with account details (if needed) and see the credentials to […]

09:17

Audacious 4.4.1 Released with Assorted Minor Improvements [OMG! Ubuntu!]

A chorus of improvements are on offer in the newest update to the popular open source, cross-platform Audacious music player. Audacious 4.4.1 builds on the changes introduced in Audacious 4.4 (a release that brought GTK3 and Qt6 UI choices, the return of a dedicated lyrics plugin, and better compatibility with PipeWire) rather than adding any huge new features of its own. But that’s no bad thing; finesse, fix ’em ups, and extended support for existing features are as welcome as gaudy new GUI elements to me. Notable changes include: The change-log also says the PulseAudio plugin is now preferred over […]

Trey Hunner: Switching from virtualenvwrapper to direnv, Starship, and uv [Planet Python]

Earlier this week I considered whether I should finally switch away from virtualenvwrapper to using local .venv managed by direnv.

I’ve never seriously used direnv, but I’ve been hearing Jeff and Hynek talk about their use of direnv for a while.

After a few days, I’ve finally stumbled into a setup that works great for me. I’d like to note the basics of this setup as well as some fancy additions that are specific to my own use case.

My old virtualenvwrapper workflow

First, I’d like to note my old workflow that I’m trying to roughly recreate:

  1. I type mkvenv3 <project_name> to create a new virtual environment for the current project directory and activate it
  2. I type workon <project_name> when I want to workon that project: this activates the correct virtual environment and changes to the project directory

The initial setup I thought of allows me to:

  1. Run echo layout python > .envrc && direnv allow to create a virtual environment for the current project and activate it
  2. Change directories into the project directory to automatically activate the virtual environment

The more complex setup I eventually settled on allows me to:

  1. Run venv <project_name> to create a virtual environment for the current project and activate it
  2. Run workon <project_name> to change directories into the project (which automatically activates the virtual environment)

The initial setup

First, I installed direnv and added this to my ~/.zshrc file:

eval "$(direnv hook zsh)"

Then whenever I wanted to create a virtual environment for a new project I created a .envrc file in that directory, which looked like this:

layout python

Then I ran direnv allow, as direnv instructed me to, to allow the new virtual environment to be automatically created and activated.

That’s pretty much it.

Unfortunately, I did not like this initial setup.

No shell prompt?

The first problem was that the virtual environment’s prompt didn’t show up in my shell prompt. This is due to direnv not allowing modification of the PS1 shell prompt. That means I’d need to modify my shell configuration to show the correct virtual environment name myself.

So I added this to my ~/.zshrc file to show the virtual environment name at the beginning of my prompt:

# Add direnv-activated venv to prompt
show_virtual_env() {
  if [[ -n "$VIRTUAL_ENV_PROMPT" && -n "$DIRENV_DIR" ]]; then
    echo "($(basename $VIRTUAL_ENV_PROMPT)) "
  fi
}
PS1='$(show_virtual_env)'$PS1

Wrong virtual environment directory

The next problem was that the virtual environment was placed in .direnv/python3.12. I wanted each virtual environment to be in a .venv directory instead.

To do that, I made a .config/direnv/direnvrc file that customized the python layout:

layout_python() {
    if [[ -d ".venv" ]]; then
        VIRTUAL_ENV="$(pwd)/.venv"
    fi

    if [[ -z $VIRTUAL_ENV || ! -d $VIRTUAL_ENV ]]; then
        log_status "No virtual environment exists. Executing \`python -m venv .venv\`."
        python -m venv .venv
        VIRTUAL_ENV="$(pwd)/.venv"
    fi

    # Activate the virtual environment
    . $VIRTUAL_ENV/bin/activate
}

Loading, unloading, loading, unloading…

I also didn’t like the loading and unloading messages that showed up each time I changed directories. I removed those by clearing the DIRENV_LOG_FORMAT variable in my ~/.zshrc configuration:

export DIRENV_LOG_FORMAT=

The more advanced setup

I don’t like it when all my virtual environment prompts show up as .venv. I want every prompt to be the name of the actual project… which is usually the directory name.

I also really wanted to be able to type venv to create a new virtual environment, activate it, and create the .envrc file for me automatically.

Additionally, I thought it would be really handy if I could type workon <project_name> to change directories to a specific project.

I made two aliases in my ~/.zshrc configuration for all of this:

venv() {
    local venv_name=${1:-$(basename "$PWD")}
    local projects_file="$HOME/.projects"

    # Check if .envrc already exists
    if [ -f .envrc ]; then
        echo "Error: .envrc already exists" >&2
        return 1
    fi

    # Create venv
    if ! python3 -m venv --prompt "$venv_name"; then
        echo "Error: Failed to create venv" >&2
        return 1
    fi

    # Create .envrc
    echo "layout python" > .envrc

    # Append project name and directory to projects file
    echo "${venv_name} = ${PWD}" >> $projects_file

    # Allow direnv to immediately activate the virtual environment
    direnv allow
}

workon() {
    local project_name="$1"
    local projects_file="$HOME/.projects"
    local project_dir

    # Check for projects config file
    if [[ ! -f "$projects_file" ]]; then
        echo "Error: $projects_file not found" >&2
        return 1
    fi

    # Get the project directory for the given project name
    project_dir=$(grep -E "^$project_name\s*=" "$projects_file" | sed 's/^[^=]*=\s*//')

    # Ensure a project directory was found
    if [[ -z "$project_dir" ]]; then
        echo "Error: Project '$project_name' not found in $projects_file" >&2
        return 1
    fi

    # Ensure the project directory exists
    if [[ ! -d "$project_dir" ]]; then
        echo "Error: Directory $project_dir does not exist" >&2
        return 1
    fi

    # Change directories
    cd "$project_dir"
}

Now I can type this to create a .venv virtual environment in my current directory, which has a prompt named after the current directory, activate it, and create a .envrc file which will automatically activate that virtual environment (thanks to that ~/.config/direnv/direnvrc file) whenever I change into that directory:

$ venv

If I wanted to customize the prompt name for the virtual environment, I could do this:

$ venv my_project

When I want to start working on that project later, I can either change into that directory or, if I’m feeling lazy, simply type:

$ workon my_project

That reads from my ~/.projects file to look up the project directory to switch to.

Switching to uv

I also decided to try using uv for all of this, since it’s faster at creating virtual environments. One benefit of uv is that it tries to select the correct Python version for the project, if it sees a version noted in a pyproject.toml file.

Another benefit of using uv is that I should also be able to update the venv to use a specific version of Python with something like --python 3.12.

Here are the updated shell aliases for the ~/.zshrc for uv:

venv() {
    local venv_name
    local dir_name=$(basename "$PWD")

    # If there are no arguments or the last argument starts with a dash, use dir_name
    if [ $# -eq 0 ] || [[ "${!#}" == -* ]]; then
        venv_name="$dir_name"
    else
        venv_name="${!#}"
        set -- "${@:1:$#-1}"
    fi

    # Check if .envrc already exists
    if [ -f .envrc ]; then
        echo "Error: .envrc already exists" >&2
        return 1
    fi

    # Create venv using uv with all passed arguments
    if ! uv venv --seed --prompt "$@" "$venv_name"; then
        echo "Error: Failed to create venv" >&2
        return 1
    fi

    # Create .envrc
    echo "layout python" > .envrc

    # Append to ~/.projects
    echo "${venv_name} = ${PWD}" >> ~/.projects

    # Allow direnv to immediately activate the virtual environment
    direnv allow
}

Switching to starship

I also decided to try out using Starship to customize my shell this week.

I added this to my ~/.zshrc:

eval "$(starship init zsh)"

And removed this, which is no longer needed since Starship will be managing the shell for me:

# Add direnv-activated venv to prompt
show_virtual_env() {
  if [[ -n "$VIRTUAL_ENV_PROMPT" && -n "$DIRENV_DIR" ]]; then
    echo "($(basename $VIRTUAL_ENV_PROMPT)) "
  fi
}
PS1='$(show_virtual_env)'$PS1

I also switched my python layout for direnv to just set the $VIRTUAL_ENV variable and add the $VIRTUAL_ENV/bin directory to my PATH, since the $VIRTUAL_ENV_PROMPT variable isn’t needed for Starship to pick up the prompt:

layout_python() {
    VIRTUAL_ENV="$(pwd)/.venv"
    PATH_add "$VIRTUAL_ENV/bin"
    export VIRTUAL_ENV
}

I also made a very boring Starship configuration in ~/.config/starship.toml:

format = """
$python\
$directory\
$git_branch\
$git_state\
$character"""

add_newline = false

[python]
format = '([(\($virtualenv\) )]($style))'
style = "bright-black"

[directory]
style = "bright-blue"

[character]
success_symbol = "[\\$](black)"
error_symbol = "[\\$](bright-red)"
vimcmd_symbol = "[❮](green)"

[git_branch]
format = "[$symbol$branch]($style) "
style = "bright-purple"

[git_state]
format = '\([$state( $progress_current/$progress_total)]($style)\) '
style = "purple"

[cmd_duration]
disabled = true

I set up such a boring configuration because when I’m teaching, I don’t want my students to be confused or distracted by a prompt that has considerably more information in it than their default prompt may have.

The biggest downside of switching to Starship has been my own earworm-oriented brain. As I update my Starship configuration files, I’ve repeatedly heard David Bowie singing “I’m a Starmaaan”. 🎶

Ground control to major TOML

After all of that, I realized that I could additionally use different Starship configurations for different directories by putting a STARSHIP_CONFIG variable in specific layouts. After that realization, I made my default configuration even more vanilla and added some alternative configurations in my ~/.config/direnv/direnvrc file:

layout_python() {
    VIRTUAL_ENV="$(pwd)/.venv"

    PATH_add "$VIRTUAL_ENV/bin"
    export VIRTUAL_ENV

    export STARSHIP_CONFIG=/home/trey/.config/starship/python.toml
}

layout_git() {
    export STARSHIP_CONFIG=/home/trey/.config/starship/git.toml
}

Those other two configuration files are fancier, and I’m not worried about them distracting my students since I’ll never be in those directories while teaching.

You can find those files in my dotfiles repository.

The necessary tools

So I replaced virtualenvwrapper with direnv, uv, and Starship, though direnv is doing most of the important work here. The use of uv and Starship is just a bonus.

I am also hoping to eventually replace my pipx use with uv, and once uv supports adding python3.x commands to my PATH, I may replace my use of pyenv with uv as well.

Thanks to all who participated in my Mastodon thread as I fumbled through discovering this setup.

Tryton News: Security Release for issue #93 [Planet Python]

Cédric Krier has found that python-sql does not escape non-Expression arguments to unary operators (such as And and Or), which makes any system exposing those operators vulnerable to an SQL injection attack.

Impact

CVSS v3.0 Base Score: 9.1

  • Attack Vector: Network
  • Attack Complexity: Low
  • Privileges Required: Low
  • User Interaction: None
  • Scope: Changed
  • Confidentiality: High
  • Integrity: Low
  • Availability: Low

Workaround

There is no known workaround.

Resolution

All affected users should upgrade python-sql to the latest version.

Affected versions: <= 1.5.1
Non affected versions: >= 1.5.2

Reference

Concerns?

Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/python-sql with the confidential checkbox checked.

1 post - 1 participant

Read full topic

How to Set Up a Debian Development Environment [Linux Journal - The Original Magazine of the Linux Community]

How to Set Up a Debian Development Environment

Setting up a development environment is a crucial step for any programmer or software developer. Whether you’re building web applications, developing software, or diving into system programming, having a well-configured environment can make all the difference in your productivity and the quality of your work. This article aims to guide you through the process of setting up a Debian development environment, leveraging the stability and versatility that Debian offers.

Introduction

Debian is renowned for its stability, security, and vast software repositories, making it a favored choice for developers. This guide will walk you through the steps of setting up a Debian development environment, covering everything from installation to configuring essential tools and programming languages. By the end, you’ll have a robust setup ready for your next project.

Prerequisites

System Requirements

Before you begin, ensure that your hardware meets the following minimum specifications:

  • Processor: 1 GHz or faster
  • RAM: At least 1 GB (2 GB or more recommended)
  • Disk Space: A minimum of 10 GB for the operating system and development tools

Software Requirements

  1. Debian Installation Media: You'll need the ISO file of the Debian distribution, which you can download from the official Debian website.

  2. Basic Understanding of the Linux Command Line: Familiarity with command-line operations will be beneficial, as many steps will involve terminal commands.

Installing Debian

Downloading the Debian ISO

Navigate to the Debian download page and choose the version that suits your needs. The Stable version is recommended for most users due to its reliability.

Creating a Bootable USB

To install Debian, you will need to create a bootable USB drive. Here are some tools you can use:

  • Rufus (Windows)
  • balenaEtcher (Cross-platform)
  • dd command (Linux)

To create the USB, follow these steps using balenaEtcher as an example:

  1. Download and install balenaEtcher.
  2. Insert your USB drive (ensure it’s backed up, as this will erase all data).
  3. Open balenaEtcher, select the downloaded Debian ISO, choose the USB drive, and click "Flash."

Installation Process

  1. Booting from USB: Restart your computer and boot from the USB drive. This typically involves pressing a key like F2, F12, or Del during startup to access the boot menu.

Exploring Network Dynamics with NetworkX on Linux [Linux Journal - The Original Magazine of the Linux Community]

Exploring Network Dynamics with NetworkX on Linux

Introduction

In the age of data, understanding complex relationships within networks—ranging from social interactions to infrastructure systems—is more crucial than ever. Network analysis provides a set of techniques and tools for exploring these relationships, offering insights into the structure and dynamics of various systems. Among the myriad tools available, NetworkX emerges as a powerful Python library designed to handle these intricate analyses with ease, especially when run on robust platforms like Linux. This article explores how to effectively use NetworkX for network analysis on a Linux environment, providing both foundational knowledge and practical applications.

Setting Up the Environment

Before diving into the world of network analysis, it’s essential to set up a conducive environment on a Linux system. Here’s a step-by-step guide to getting started:

  1. Installing Linux: If you don’t have Linux installed, Ubuntu is a recommended distribution for beginners due to its user-friendly interface and extensive community support. You can download it from the official Ubuntu website and follow the installation guide to set it up on your machine.

  2. Setting up Python and Pip: Most Linux distributions come with Python pre-installed. You can verify this by running python3 --version in your terminal. If it’s not installed, you can install Python using your distribution’s package manager (e.g., sudo apt install python3). Next, install pip, Python’s package manager, by running sudo apt install python3-pip.

  3. Installing NetworkX: With Python and pip ready, install NetworkX by running pip3 install networkx. Optionally, install Matplotlib for visualizing networks (pip3 install matplotlib).
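
To confirm the setup works, a minimal sketch along these lines (the node names are purely illustrative) builds a small graph and inspects it:

import networkx as nx

# Build a small undirected friendship network
G = nx.Graph()
G.add_edges_from([
    ("Alice", "Bob"),
    ("Bob", "Carol"),
    ("Carol", "Alice"),
    ("Carol", "Dave"),
])

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
print("Carol's neighbours:", list(G.neighbors("Carol")))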

Fundamentals of Network Analysis

Network analysis operates on networks, which are structures consisting of nodes (or vertices) connected by edges (or links). Here’s a breakdown of key concepts:

03-10-2024

18:43

Mozilla Firefox 131 Brings Tab Hover Previews, URL Fragments + More [OMG! Ubuntu!]

Mozilla Firefox 131 is now available to download with a small set of improvements in tow. The first change I noticed when opening Firefox 131 is the new icon for the ‘all tabs’ feature. Previously a small downward-pointing arrow, this new, more obvious icon is a small squarish depiction of a tabbed web browser. The change was made ahead of the upcoming vertical tabs feature, which moves this button to the toolbar if vertical tabs are enabled. Mozilla say “hovering the mouse over an unfocused tab will now display a visual preview of its contents”. These visual tab hover previews were […]

You're reading Mozilla Firefox 131 Brings Tab Hover Previews, URL Fragments + More, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Raspberry Pi’s New $70 AI Camera Works With All Pi Models [OMG! Ubuntu!]

If you’re looking to kick the tyres on AI image processing/recognition projects and own an older Raspberry Pi model, the company’s new ‘AI Camera’ add-on will be of interest. Where the $70 Raspberry Pi AI Kit announced in June only works with a Raspberry Pi 5, the new $70 AI camera works with all Raspberry Pi boards that have the relevant camera connector port (spoiler: most, including the Raspberry Pi Zero and Raspberry Pi 400). This new AI Camera is the latest fruit from Raspberry Pi’s ongoing partnership with Sony Semiconductor Solutions, making use of the latter outfit’s IMX500 image […]

You're reading Raspberry Pi’s New $70 AI Camera Works With All Pi Models, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Real Python: A Guide to Modern Python String Formatting Tools [Planet Python]

When working with strings in Python, you may need to interpolate values into your string and format these values to create new strings dynamically. In modern Python, you have f-strings and the .format() method to approach the tasks of interpolating and formatting strings.

In this tutorial, you’ll learn how to:

  • Use f-strings and the .format() method for string interpolation
  • Format the interpolated values using replacement fields
  • Create custom format specifiers to format your strings

To get the most out of this tutorial, you should know the basics of Python programming and the string data type.

Get Your Code: Click here to download the free sample code that shows you how to use modern string formatting tools in Python.

Take the Quiz: Test your knowledge with our interactive “A Guide to Modern Python String Formatting Tools” quiz. You’ll receive a score upon completion to help you track your learning progress:



Getting to Know String Interpolation and Formatting in Python

Python has developed different string interpolation and formatting tools over the years. If you’re getting started with Python and looking for a quick way to format your strings, then you should use Python’s f-strings.

Note: To learn more about string interpolation, check out the String Interpolation in Python: Exploring Available Tools tutorial.

If you need to work with older versions of Python or legacy code, then it’s a good idea to learn about the other formatting tools, such as the .format() method.

In this tutorial, you’ll learn how to format your strings using f-strings and the .format() method. You’ll start with f-strings to kick things off, which are quite popular in modern Python code.

Using F-Strings for String Interpolation

Python has a string formatting tool called f-strings, which stands for formatted string literals. F-strings are string literals that you can create by prepending an f or F to the literal. They allow you to do string interpolation and formatting by inserting variables or expressions directly into the literal.

Creating F-String Literals

Here you’ll take a look at how you can create an f-string by prepending the string literal with an f or F:

Python
>>> f"Hello, Pythonista!"
'Hello, Pythonista!'

>>> F"Hello, Pythonista!"
'Hello, Pythonista!'

Using either f or F has the same effect. However, it’s a more common practice to use a lowercase f to create f-strings.

Just like with regular string literals, you can use single, double, or triple quotes to define an f-string:

Python
>>> f'Single-line f-string with single quotes'
'Single-line f-string with single quotes'

>>> f"Single-line f-string with double quotes"
'Single-line f-string with double quotes'

>>> f'''Multiline triple-quoted f-string
... with single quotes'''
'Multiline triple-quoted f-string\nwith single quotes'

>>> f"""Multiline triple-quoted f-string
... with double quotes"""
'Multiline triple-quoted f-string\nwith double quotes'

Up to this point, your f-strings look pretty much the same as regular strings. However, if you create f-strings like those in the examples above, you’ll get complaints from your code linter if you have one.

The remarkable feature of f-strings is that you can embed Python variables or expressions directly inside them. To insert the variable or expression, you must use a replacement field, which you create using a pair of curly braces.

Interpolating Variables Into F-Strings

The variable that you insert in a replacement field is evaluated and converted to its string representation. The result is interpolated into the original string at the replacement field’s location:

Python
>>> site = "Real Python"

>>> f"Welcome to {site}!"
'Welcome to Real Python!'

In this example, you’ve interpolated the site variable into your string. Note that Python treats anything outside the curly braces as a regular string.
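
Replacement fields aren’t limited to plain variable names. Although the example above only interpolates a variable, any expression works too, optionally combined with a format specifier (standard f-string behaviour, not an excerpt from the article):

Python
>>> price = 2.5
>>> quantity = 3
>>> f"Total: {price * quantity:.2f} dollars"
'Total: 7.50 dollars'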

Read the full article at https://realpython.com/python-formatted-output/ »



Kushal Das: Thank you Gnome Nautilus scripts [Planet Python]

As I upload photos to various services, I generally resize them as required based on portrait or landscape mode. I used to do that for all the photos in a directory and then pick which ones to use. But I wanted to do it selectively: open the photos in the Gnome Nautilus (Files) application, right click, and resize only the ones I want.

This week I noticed that I can do that with scripts. They can be written in any language; the selected files are passed as command line arguments, and their full paths are also available in the NAUTILUS_SCRIPT_SELECTED_FILE_PATHS environment variable, joined by newline characters.

To add a script to the right click menu, you just need to place it in the ~/.local/share/nautilus/scripts/ directory. It will then show up under Scripts in the right click menu.

right click menu

Below is the script I am using to reduce image sizes:

#!/usr/bin/env python3
import os
import sys
import subprocess
from PIL import Image

# paths = os.environ.get("NAUTILUS_SCRIPT_SELECTED_FILE_PATHS", "").split("\n")

paths = sys.argv[1:]

for fpath in paths:
    if fpath.endswith(".jpg") or fpath.endswith(".jpeg"):
        # Assume that is a photo
        try:
            img = Image.open(fpath)
            # basename = os.path.basename(fpath)
            basename = fpath
            name, extension = os.path.splitext(basename)
            new_name = f"{name}_ac{extension}"
            w, h = img.size
            # If w > h then it is a landscape photo
            if w > h:
                subprocess.check_call(["/usr/bin/magick", basename, "-resize", "1024x686", new_name])
            else: # It is a portrait photo
                subprocess.check_call(["/usr/bin/magick", basename, "-resize", "686x1024", new_name])
        except:
            # Don't care, continue
            pass

You can see it in action (I selected the photos and right clicked, but the recording missed that part):

right click on selected photos

Real Python: Quiz: A Guide to Modern Python String Formatting Tools [Planet Python]

Test your understanding of Python’s tools for string formatting, including f-strings and the .format() method.

Take this quiz after reading our A Guide to Modern Python String Formatting Tools tutorial.



Python Software Foundation: Python 3.13 and the Latest Trends: A Developer's Guide to 2025 - Live Stream Event [Planet Python]

Join Tania Allard, PSF Board Member, and Łukasz Langa, CPython Developer-in-Residence, for ‘Python 3.13 and the Latest Trends: A Developer’s Guide to 2025’, a live stream event hosted by Paul Everitt from JetBrains. Thanks to JetBrains for partnering with us on the Python Developers Survey and this event to highlight the current state of Python!

The session will take place tomorrow, October 3, at 5:00 pm CEST (11:00 am EDT). Tania and Łukasz will be discussing the exciting new features in Python 3.13, plans for Python 3.15 and current Python trends gathered from the 2023 Annual Developers Survey. Don't miss this chance to hear directly from the experts behind Python’s development!

Watch the live stream event on YouTube

Don’t forget to enable YouTube notifications for the stream and mark your calendar.

PyCharm: Prompt AI Directly in the Editor [Planet Python]

With PyCharm, you now have the support of AI Assistant at your fingertips. You can interact with it right where you do most of your work – in the editor. 

Stuck with an error in your code? Need to add documentation or tests? Just start typing your request on a new line in the editor, as if you were typing in the AI Assistant chat window. PyCharm will automatically recognize your natural language request and generate a response.

PyCharm leaves a purple mark in the gutter next to lines changed by AI Assistant so you can easily see what has been updated. 

If you don’t like the initial suggestion, you can generate a new one by pressing Tab. You can also adjust the initial input by clicking on the purple block in the gutter or simply pressing Ctrl+/ or /.

Want to get assistance with a specific argument? You can narrow the context that AI Assistant uses for its response as much as you want. Just put the caret in the relevant context, type the $ or ? symbol, and start writing. PyCharm will recognize your prompt and take the current context into account for its suggestions. 

The new inline AI assistance works for Python, JavaScript, TypeScript, JSON, and YAML file formats, while the option to narrow the context works only for Python so far.

This feature is available to all AI Assistant subscribers in the second PyCharm 2024.3 EAP build. You can get a free trial version of AI Assistant straight in the IDE: to enable AI Assistant, open a project in PyCharm, click the AI icon on the right-hand toolbar, and follow the instructions that appear.

Download PyCharm 2024.3 EAP

William Minchin: u202410012332 [Planet Python]

Microblogging v1.3.0 for Pelican released! Posts should now sort as expected. Thanks @ashwinvis. on PyPI

02-10-2024

08:48

Linux Mint Gives First Look at New Cinnamon Theme [OMG! Ubuntu!]

As revealed last month, Linux Mint is working on an improved default theme for the Cinnamon desktop – and today we got our first look at what’s coming. The way Cinnamon looks in Linux Mint (the distribution) is not the way it looks if you install the Cinnamon desktop yourself on a different distro. There, assuming a theme pack isn’t pulled in as a dependency, you’ll see the default built-in Cinnamon theme. And it’s that built-in theme that Linux Mint is currently improving. Mint says “the new default theme [is] much darker and contrasted than before. Objects are rounded […]

You're reading Linux Mint Gives First Look at New Cinnamon Theme, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

PyCoder’s Weekly: Issue #649 (Oct. 1, 2024) [Planet Python]

#649 – OCTOBER 1, 2024
View in Browser »



Python 3.13: Cool New Features for You to Try

In this tutorial, you’ll learn about the new features in Python 3.13. You’ll take a tour of the new REPL and error messages and see how you can try out the experimental free threading and JIT versions of Python 3.13 yourself.
REAL PYTHON

Incremental GC and Pushing Back the 3.13.0 Release

Some last minute performance considerations are delaying the release of Python 3.13 with one of the features being backed out. The new target is next week.
PYTHON.ORG

Debug With pdb and breakpoint()

Python ships with a command-line based debugger called pdb. To set a breakpoint, you call the breakpoint() function in your code. This post introduces you to pdb and debugging from the command-line.
JUHA-MATTI SANTALA
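
As a quick illustration of the mechanism (a generic sketch, not code from the linked post):

def mean(numbers):
    breakpoint()  # pauses execution here and opens the (Pdb) prompt
    return sum(numbers) / len(numbers)

mean([2, 4, 6])  # at the (Pdb) prompt, try: p numbers / n / c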

Last Chance to be Part of Snyk’s DevSeccon 2024 Oct 8-9!


Don’t miss out on your chance to register for DevSecCon 2024! From the exciting lineup of 20+ sessions, here’s one that you can’t skip: Ali Diamond, from Hak5: “I’m A Software Engineer, and I Have to Make Bad Security Decisions—why?” Save your spot →
SNYK.IO sponsor

Django Project Ideas

Looking to experiment or build your portfolio? Discover creative Django project ideas for all skill levels, from beginner apps to advanced full-stack projects.
EVGENIA VERBINA

Articles & Tutorials

Python 3.13 Preview: A Modern REPL

In this tutorial, you’ll explore one of Python 3.13’s new features: a new and modern interactive interpreter, also known as a REPL.
REAL PYTHON

When Should You Upgrade to Python 3.13?

This post talks about the pros and cons of upgrading to Python 3.13 and why you might do it immediately or wait for the first patch release in December.
ITAMAR TURNER-TRAURING

Refactoring Python With Tree-Sitter & Jedi

Jack was toying around with a refactor where he wanted to replace a variable name across a large number of files. His usual tools of grep and sed weren’t sufficient, so he tried tree-sitter instead. Associated HN Discussion.
JACK EVANS

rerankers: A Lightweight Library to Unify Ranking Methods

Information retrieval often uses a two-stage pipeline, where the first stage does a quick pass and the second re-ranks the content. This post talks about re-ranking, the different methods out there, and introduces a Python library to help you out.
BENJAMIN CLAVIE

Counting Sheeps With Contracts in Python

A code contract is a way of specifying how your code is supposed to perform. They can be useful for tests and to generally reduce the number of bugs in your code. This article introduces you to the concept and the dbc library.
LÉO GERMOND

Paying Down Tech Debt: Further Learnings

Technical debt is the accumulation of design decisions that eventually slow teams down. This post talks about two ways to pay it down: using tech debt payments to get into the flow, and what you need before doing a big re-write.
GERGELY OROSZ

Asyncio gather() in the Background

The asyncio.gather() method works as the meeting point for multiple co-routines, but it doesn’t have to be a synchronous call. This post teaches you how to use .gather() in the background.
JASON BROWNLEE
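
The gist of the technique (my own sketch, not code from the post): gather() returns an awaitable future right away, so you can kick off the work, do something else, and only block when you need the results.

import asyncio

async def square(n):
    await asyncio.sleep(0.1)  # stand-in for real I/O
    return n * n

async def main():
    # The coroutines are scheduled as tasks immediately...
    background = asyncio.gather(square(1), square(2), square(3))
    print("doing other work while they run")
    # ...and we only wait here, when the results are needed
    print(await background)  # [1, 4, 9]

asyncio.run(main())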

Advanced Python import Techniques

The Python import system is as powerful as it is useful. In this in-depth video course, you’ll learn how to harness this power to improve the structure and maintainability of your code.
REAL PYTHON course

Mentors

Ryan just finished his second round of mentoring with the Djangonaut.Space program. This post talks about both how you can help your mentor help you, and how to be a good mentor.
RYAN CHELEY

Customising Object Creation With __new__

The dunder method __new__ is used to customise object creation and is a core stepping stone in understanding metaprogramming in Python.
RODRIGO GIRÃO SERRÃO
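
As a tiny illustration of the idea (my sketch, not the article’s example): because str is immutable, customising its value has to happen in __new__, before the instance exists.

class UpperStr(str):
    def __new__(cls, value):
        # __new__ creates and returns the instance; __init__ would be
        # too late to change an immutable str's contents
        return super().__new__(cls, value.upper())

print(UpperStr("hello"))  # HELLO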

Prompting a User for Input

This short post shows you how to prompt your users for input with Python’s built-in input() function.
TREY HUNNER
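
The whole technique in two lines (standard library behaviour):

name = input("What's your name? ")  # blocks until the user presses Enter
print(f"Nice to meet you, {name}!")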

When and How to Start Coding With Kids

Talk Python interviews Anna-Lena Popkes and they talk about how and when to teach coding to children.
TALK PYTHON podcast

Projects & Code

Events

PyConZA 2024

October 3 to October 5, 2024
PYCON.ORG

PyCon ES 2024

October 4 to October 6, 2024
PYCON.ORG

Django Day Copenhagen 2024

October 4 to October 5, 2024
DJANGODAY.DK

PyCon Uganda 2024

October 9 to October 14, 2024
PYCON.ORG

PyCon NL 2024

October 10 to October 11, 2024
PYCON.ORG


Happy Pythoning!
This was PyCoder’s Weekly Issue #649.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

PyCharm: Python 3.13 and the Latest Trends: A Developer’s Guide to 2025 [Planet Python]

We invite you to join us in just two days’ time, on October 3 at 5:00 pm CEST (11:00 am EDT), for a livestream shining a spotlight on Python 3.13 and the trends shaping its development.

Our speakers:

  • Łukasz Langa, CPython Developer in Residence, release manager for Python 3.8–3.9, and creator of Black.
  • Tania Allard, Vice-chair of the PSF board, PSF fellow, and Director at Quansight Labs.

They will discuss the most notable features of Python 3.13 and examine the industry trends likely to influence its future. This is a great opportunity to get ahead of the release and ask your questions directly to the experts.

Don’t forget to enable YouTube notifications and mark your calendar.

PyCharm: PyCharm’s Interactive Tables for Data Science [Planet Python]

Data cleaning, exploration, and visualization are some of the most time-consuming tasks for data scientists. Nearly 50% of data specialists dedicate 30% or more of their time to data preparation. The pandas and Polars libraries are widely used for these purposes, each offering unique advantages. PyCharm supports both libraries, enabling users to efficiently explore, clean, and visualize data, even with large datasets.

In this blog post, you’ll discover how PyCharm’s interactive tables can enhance your productivity when working with either Polars or pandas. You will also learn how to perform many different data exploration tasks without writing any code and how to use JetBrains AI Assistant for data analysis.

Getting started 

To start using pandas for data analysis, import the library and load data from a file using pd.read_csv("FileName"), or drag and drop a CSV file into a Jupyter notebook. If you’re using Polars, import the library and use pl.read_csv("FileName/path to the file") to load data into a DataFrame. Then, print the dataset just by using the name of the variable.
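
For example, with a hypothetical housing.csv in the working directory:

import pandas as pd
import polars as pl

df_pd = pd.read_csv("housing.csv")  # pandas DataFrame
df_pl = pl.read_csv("housing.csv")  # Polars DataFrame

df_pd  # in a Jupyter cell, naming the variable renders the interactive table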

PyCharm’s interactive tables – key features and uses

Browse, sort, and view datasets

Interactive tables offer a wide range of features that allow you to easily explore your data. For example, you can navigate through your data with infinite horizontal and vertical scrolling, use single and multiple column sorting, and many other features.

This feature allows you to sort columns alphabetically or maintain the existing column order. You can also find specific columns by typing the column name in the Column List menu. Through the context menu or Column List, you can selectively hide or display columns. For deeper analysis, you can hide all but the essential columns or use the Hide Other Columns option to focus on a single column.

Finally, you can open your dataframe in a separate window for even more in-depth analysis.

Explore your data 

You can easily understand data types directly from column headers, where a small icon indicates each column’s type, for example distinguishing object columns from numeric ones.

Data Types

Additionally, you can access descriptive statistics by hovering over column headers in Compact mode or view them directly in Detailed mode, where distribution histograms are also available.

Create code-free data visualizations

Interactive tables also offer several features available in the Chart view section.

  • No-code chart creation, allowing you to visualize data effortlessly.
  • Ability to save your charts with one click.

Use AI Assistant for data analysis and visualization

You can access the AI Assistant in the upper-left corner of the tables for the following purposes:

  • To get insights about your data quickly.
  • To visualize your data.

Using interactive tables for reliable Exploratory Data Analysis (EDA)

Why is EDA important? 

Exploratory Data Analysis (EDA) is a crucial step in data science, as it allows data scientists to understand the underlying structure and patterns within a dataset before applying any modeling techniques. EDA helps you identify anomalies, detect outliers, and uncover relationships among variables – all of which are essential for making informed decisions.

Interactive tables offer many features that allow you to explore your data faster and get reliable results.

Spotting statistics, patterns, and outliers 

Viewing the dataset information

Let’s look at a real-life example of how the tables could boost the productivity of your EDA. For this example, we will use the Bengaluru House Dataset. Normally we start with an overview of our data. This includes just viewing it to understand the size of the dataset, data types of the columns, and so on. While you can certainly do this with the help of code, using interactive tables allows you to get this data without code. So, in our example, the size of the dataset is 13,320 rows and 9 columns, as you can see in the table header.

Rows and columns information

Our dataset also contains different data types, including numeric and string data. This means we can use different techniques for working with data, including correlation analysis and others.

Data types

And of course you can take a look at the data with the help of infinite scrolling and other features we mentioned above.

Performing statistical analysis

After getting acquainted with the data, the next step might be more in-depth analysis of the statistics. PyCharm provides a lot of important information about the columns in the table headers, including missing data, mode, mean, median, and so on.
For example, here we see that many columns have missing data. In the “bath” column, we obviously have an outlier, as the max value significantly exceeds the 95th percentile.

Additionally, data type mismatches, such as “total_sqft” not being a float or integer, indicate inconsistencies that could impact data processing and analysis.

After sorting, we notice one possible reason for the problem: the use of text values in data and ranges instead of normal numerical values.
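
One way to handle such values outside the table view (a pandas sketch of my own with illustrative values, not a PyCharm feature) is to coerce ranges to their midpoint and flag everything else as missing:

import pandas as pd

def to_sqft(value):
    parts = str(value).split("-")  # "2100 - 2850" -> ["2100 ", " 2850"]
    try:
        numbers = [float(p) for p in parts]
        return sum(numbers) / len(numbers)  # midpoint of a range
    except ValueError:
        return float("nan")  # e.g. "34.46Sq. Meter" needs separate handling

df = pd.DataFrame({"total_sqft": ["1056", "2100 - 2850", "34.46Sq. Meter"]})
print(df["total_sqft"].apply(to_sqft))  # 1056.0, 2475.0, NaN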

Analyzing the data using AI

Additionally, if our dataset doesn’t have hundreds of columns, we can use the help of AI Assistant and ask it to explain the DataFrame. From there, we can prompt it with any important questions, such as “What data problems in the dataset should be addressed and how?”

AI Assistant

Visualizing data with built-in charting

In some cases, data visualization can help you understand your data. PyCharm interactive tables provide two options for that: the first is Chart View, and the second is Generate Visualizations in Chat.

Let’s say my hypothesis is that the price of a house should be correlated with its total floor area. In other words, the bigger a house is, the more expensive it should be. In this case, I can use a scatter plot in Chart View and discover that my hypothesis is likely correct.
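
For comparison, the code-based equivalent of that Chart View check is only a couple of lines (an illustrative sketch with made-up numbers; plotting requires Matplotlib):

import pandas as pd

df = pd.DataFrame({
    "total_sqft": [600, 850, 1100, 1450, 2000],  # made-up values
    "price": [40, 62, 75, 120, 180],
})
ax = df.plot.scatter(x="total_sqft", y="price")
ax.figure.savefig("sqft_vs_price.png")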

Wrapping up

PyCharm Professional’s interactive tables offer numerous benefits that significantly boost your productivity in data exploration and data cleaning. The tables allow you to work with the most popular data science library, pandas, and the fast-growing framework Polars, without writing any code. This is because the tables provide features like browsing, sorting, and viewing datasets; code-free visualizations; and AI-assisted insights.

Interactive tables in PyCharm not only save your time but also reduce the complexity of data manipulation tasks, allowing you to focus on deriving meaningful insights instead of writing boilerplate code for basic tasks.

Download PyCharm Professional and get an extended 60-day trial by using the promo code “PyCharmNotebooks”. The free subscription is available for individual users only.

For more information on interactive tables in PyCharm, check out our related blogs, guides, and documentation:

Real Python: Differences Between Python's Mutable and Immutable Types [Planet Python]

As a Python developer, you’ll have to deal with mutable and immutable objects sooner or later. Mutable objects are those that allow you to change their value or data in place without affecting the object’s identity. In contrast, immutable objects don’t allow this kind of operation. You’ll just have the option of creating new objects of the same type with different values.

In Python, mutability is a characteristic that may profoundly influence your decision when choosing which data type to use in solving a given programming problem. Therefore, you need to know how mutable and immutable objects work in Python.

In this video course, you’ll:

  • Understand how mutability and immutability work under the hood in Python
  • Explore immutable and mutable built-in data types in Python
  • Identify and avoid some common mutability-related gotchas
  • Understand and control how mutability affects your custom classes
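
A quick REPL contrast captures the core distinction (standard Python behaviour, not the course’s own example):

Python
>>> numbers = [1, 2, 3]  # lists are mutable
>>> numbers.append(4)    # changed in place: same object, new contents
>>> numbers
[1, 2, 3, 4]

>>> point = (1, 2)       # tuples are immutable
>>> point += (3,)        # builds a new tuple and rebinds the name
>>> point
(1, 2, 3)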


Python Insider: Python 3.12.7 released [Planet Python]

  

I'm pleased to announce the release of Python 3.12.7:

https://www.python.org/downloads/release/python-3127/

 

This is the seventh maintenance release of Python 3.12.

Python 3.12 is the newest major release of the Python programming language, and it contains many new features and optimizations. 3.12.7 is the latest maintenance release, containing more than 100 bugfixes, build improvements and documentation changes since 3.12.6.

 

Major new features of the 3.12 series, compared to 3.11

 

New features

Type annotations

Deprecations

  • The deprecated wstr and wstr_length members of the C implementation of unicode objects were removed, per PEP 623.
  • In the unittest module, a number of long deprecated methods and classes were removed. (They had been deprecated since Python 3.1 or 3.2).
  • The deprecated smtpd and distutils modules have been removed (see PEP 594 and PEP 632). The setuptools package continues to provide the distutils module.
  • A number of other old, broken and deprecated functions, classes and methods have been removed.
  • Invalid backslash escape sequences in strings now warn with SyntaxWarning instead of DeprecationWarning, making them more visible. (They will become syntax errors in the future.)
  • The internal representation of integers has changed in preparation for performance enhancements. (This should not affect most users as it is an internal detail, but it may cause problems for Cython-generated code.)

For more details on the changes to Python 3.12, see What’s new in Python 3.12.

 

More resources

 

Enjoy the new releases

Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.


Your release team,
Thomas Wouters
Łukasz Langa
Ned Deily
Steve Dower

01-10-2024

18:01

Mission Center (Linux System Monitor) Now Reports Fan Info [OMG! Ubuntu!]

A major new version of Mission Center, a modern system monitor app for Linux desktops, has been released. Fans of this Rust-based GTK4/libadwaita system monitoring tool (which, to address the recurring elephant in the room, does indeed have a user interface inspired by—now I’d argue superior to—the Windows system monitor app) will find a lot to like in the latest update. I’m not going to recap all of this tool’s existing features in this post as I’ve covered this app a few times in the past. The Mission Center homepage has more details for the uninitiated. Instead, I’m going to focus […]

You're reading Mission Center (Linux System Monitor) Now Reports Fan Info, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Robin Wilson: I won two British Cartographic Society awards! [Planet Python]

It’s been a while since I posted here – I kind of lost momentum over the summer (which is a busy time with a school-aged child) and never really picked it up again.

Anyway, I wanted to write a quick post to tell people that I won two awards at the British Cartographic Society awards ceremony a few weeks ago.

They were both for my British Placename Mapper web app, which is described in more detail in this blog post. If you haven’t seen it already, I strongly recommend you check it out.

I won a Highly Commended certificate in the Avenza Award for Electronic Mapping, and the First Prize trophy for the Ordnance Survey Award (for any map using OS data).

The certificates came in a lovely frame, and the trophy is enormous – about 30cm high and weighing over 3kg!

Here’s the trophy:

I was presented with the trophy at the BCS Annual Conference in London, but they very kindly offered to keep the trophy to save me carrying it across London on my wheelchair and back on the train, so they invited me to Ordnance Survey last week to be presented with it again. I had a lovely time at OS – including 30 minutes with their Director General/CEO and was formally presented with my trophy again (standing in front of the first ever Ordnance Survey map!):

Full information on the BCS awards is available on their website and I strongly recommend submitting any appropriate maps you’ve made for next year’s awards. I need to get my thinking cap on for next year’s entry…

09:12

Arch Linux Is Now Working Directly With Valve [Slashdot: Linux]

The Arch Linux team has announced a collaboration with Valve, working to support critical infrastructure projects like a build service and secure signing enclave for the Arch Linux distribution. Tom's Hardware reports: If you're familiar with Valve and Steam Deck, you may already know that the Deck uses SteamOS 3, which is built on top of Arch Linux. Thanks to the Arch Linux base and Valve's development of the Proton compatibility layer for playing Windows games on Linux, we now have a far improved Linux gaming scene, especially on Valve's Steam Deck and Deck OLED handhelds. While Valve's specific reasons for picking Arch Linux for Steam Deck remain unknown, it's pretty easy to guess why it was picked. Mainly, it's a particularly lightweight distribution maintained since March 2002, which lends itself well to gaming with minimal performance overhead. A more intensive Linux distribution may not have been the ideal base for SteamOS 3, which is targeted at handhelds like Steam Deck first. As primary Arch Linux developer Levente Polyak discloses in the announcement post, "Valve is generously providing backing for two critical projects that will have a huge impact on our distribution: a build service infrastructure and a secure signing enclave. By supporting work on a freelance basis for these topics, Valve enables us to work on them without being limited solely by the free time of our volunteers." Polyak continues, "This opportunity allows us to address some of the biggest outstanding challenges we have been facing for a while. The collaboration will speed up the progress that would otherwise take much longer for us to achieve, and will ultimately unblock us from finally pursuing some of our planned endeavors [...] We believe this collaboration will greatly benefit Arch Linux, and are looking forward to share further development on the mailing list as work progresses."

Read more of this story at Slashdot.

VirtualBox 7.1.2 Adds Support for 3D Acceleration in ARM VMs [OMG! Ubuntu!]

Oracle has released a new maintenance update for VirtualBox, its open-source virtualisation software. VirtualBox 7.1.2 is the first such point release since the VirtualBox 7.1 series debuted earlier this month. Naturally, it builds on that major release with a flurry of bug fixes, performance finesse, and UI refinements, and adds a few new features. Among them, the latest version adds support for a multi-window layout, gives users the option to choose a remote display security method, and fixes 3D acceleration-related quirks, including black screens in Windows VMs and minor rendering issues. A bug fix ensures virtual machines created using […]

You're reading VirtualBox 7.1.2 Adds Support for 3D Acceleration in ARM VMs, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Ubuntu Patches ‘Severe’ Security Flaw in CUPS [OMG! Ubuntu!]

If you’ve cast a half-glazed eye over Linux social media feeds at some point in the past few days you may have caught wind that a huge Linux security flaw was about to be disclosed. And today it was: a remote code execution flaw affecting the CUPS printing stack used in most major desktop Linux distributions (including Ubuntu, and also Chrome OS). With a severity score of 9.9 it’s right at the edge of the most severe vulnerabilities possible. The CUPS Security Vulnerability Canonical explains in its security blog: “At its core, the vulnerability is exploited by tricking CUPS into […]

You're reading Ubuntu Patches ‘Severe’ Security Flaw in CUPS, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

30-09-2024

20:08

COSMIC DE Alpha 2 Released, This is What’s New [OMG! Ubuntu!]

Chocks away —British saying, don’t stare at me weirdly— as the second alpha of System76’s homegrown COSMIC desktop environment has been released. To make it easy for us all to try out the latest improvements, a second alpha build of Pop!_OS 24.04 is also available to download. Those who installed the first Pop!_OS 24.04 alpha don’t need to re-install. All of the improvements in this post are available as software updates via the COSMIC App Store. Not that anyone needs to use Pop!_OS to try COSMIC. This Rust-based DE is also available to test on a wide range of […]

You're reading COSMIC DE Alpha 2 Released, This is What’s New, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Ubuntu 24.10 ARM ISO Supports the ThinkPad X13s [OMG! Ubuntu!]

Ubuntu 24.10 supports the Snapdragon-powered Lenovo ThinkPad X13s laptop in the official ‘generic’ ARM64 ISO — a notable change. Although it is possible to use Ubuntu 23.10 on the ThinkPad X13s, it requires a custom ISO spun up specifically for this device. Ubuntu 24.04 LTS had no official installer image for this device (it is possible to upgrade to 24.04 from 23.10, albeit with caveats). But with the arrival of Ubuntu 24.10 in October, the standard Ubuntu ARM64 ISO (which works much like a regular Intel/AMD ISO, with a live session and guided installer) will happily boot on this […]

You're reading Ubuntu 24.10 ARM ISO Supports the ThinkPad X13s, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

29-09-2024

10:20

How I Booted Linux On an Intel 4004 from 1971 [Slashdot: Linux]

Long-time Slashdot reader dmitrygr writes: Debian Linux booted on a 4-bit intel microprocessor from 1971 — the first microprocessor in the world — the 4004. It is not fast, but it is a real Linux kernel with a Debian rootfs on a real board whose only CPU is a real intel 4004 from the 1970s. There's a detailed blog post about the experiment. (Its title? "Slowly booting full Linux on the intel 4004 for fun, art, and absolutely no profit.") In the post dmitrygr describes testing speed optimizations with an emulator where "my initial goal was to get the boot time under a week..."

Read more of this story at Slashdot.

28-09-2024

10:05

Unlock Your Creativity: Building and Testing Websites in the Ubuntu Web Development Playground [Linux Journal - The Original Magazine of the Linux Community]

Unlock Your Creativity: Building and Testing Websites in the Ubuntu Web Development Playground

Introduction

Ubuntu stands out as one of the most popular Linux distributions among web developers due to its stability, extensive community support, and robust package management. This article dives into creating a dedicated web development environment in Ubuntu, guiding you from the initial system setup to deploying and maintaining your websites.

Setting Up Ubuntu for Web Development

System Requirements and Installation Basics

Before diving into web development, ensure your Ubuntu installation is up to date. Ubuntu can run on a variety of hardware, but for a smooth development experience, a minimum of 4GB RAM and 25GB of available disk space is recommended. After installing Ubuntu, update your system:

sudo apt update && sudo apt upgrade

Installing Essential Packages

Web development typically involves a stack of software that includes a web server, a database system, and programming languages. Install the LAMP (Linux, Apache, MySQL, PHP) stack using:

sudo apt install apache2 mysql-server php libapache2-mod-php php-mysql

For JavaScript development, install Node.js and npm:

sudo apt install nodejs npm

Recommended Text Editors and IDEs

Choose an editor that enhances your coding efficiency. Popular choices include:

  • Visual Studio Code (VS Code): Lightweight and powerful, with extensive plugin support.
  • Sublime Text: Known for speed and efficiency, with a vast array of language packages.
  • PhpStorm: Ideal for PHP developers, offering deep code understanding and top-notch coding assistance.

Creating a Development Environment

Setting Up Local Web Servers

Apache and Nginx are the most popular web servers. Apache is generally easier to configure for beginners:

sudo systemctl start apache2
sudo systemctl enable apache2

Nginx, alternatively, offers high performance and low resource consumption:

sudo apt install nginx
sudo systemctl start nginx
sudo systemctl enable nginx

Configuring Backend Languages

Configure PHP by adjusting settings in php.ini (often found at /etc/php/7.4/apache2/php.ini) to suit your development needs. Python and other languages can be set up similarly, ensuring they are properly integrated with your web server.

Using Containerization Tools

Docker and Kubernetes revolutionize development by isolating environments and streamlining deployment:

27-09-2024

08:58

Tor Project Merges With Tails [Slashdot: Linux]

The Tor Project: Today the Tor Project, a global non-profit developing tools for online privacy and anonymity, and Tails, a portable operating system that uses Tor to protect users from digital surveillance, have joined forces and merged operations. Incorporating Tails into the Tor Project's structure allows for easier collaboration, better sustainability, reduced overhead, and expanded training and outreach programs to counter a larger number of digital threats. In short, coming together will strengthen both organizations' ability to protect people worldwide from surveillance and censorship. Countering the threat of global mass surveillance and censorship to a free Internet, Tor and Tails provide essential tools to help people around the world stay safe online. By joining forces, these two privacy advocates will pool their resources to focus on what matters most: ensuring that activists, journalists, other at-risk and everyday users will have access to improved digital security tools. In late 2023, Tails approached the Tor Project with the idea of merging operations. Tails had outgrown its existing structure. Rather than expanding Tails's operational capacity on their own and putting more stress on Tails workers, merging with the Tor Project, with its larger and established operational framework, offered a solution. By joining forces, the Tails team can now focus on their core mission of maintaining and improving Tails OS, exploring more and complementary use cases while benefiting from the larger organizational structure of The Tor Project. This solution is a natural outcome of the Tor Project and Tails' shared history of collaboration and solidarity. 15 years ago, Tails' first release was announced on a Tor mailing list, Tor and Tails developers have been collaborating closely since 2015, and more recently Tails has been a sub-grantee of Tor. For Tails, it felt obvious that if they were to approach a bigger organization with the possibility of merging, it would be the Tor Project.

Read more of this story at Slashdot.

How to Disable the ‘Recent’ Files Section in Nautilus [OMG! Ubuntu!]

There’s one feature in the Nautilus file manager I use daily: the Recent files shortcut. One click brings up a pseudo-folder showing all of my recently downloaded, modified, and newly created files, regardless of which folders they’re in. I find this grouping dead handy – but I accept it’s also dead revealing too. Which is why not everyone likes this feature. Individual files can be hidden from view manually, but that’s effort. Since ‘Recent’ is pinned at the top of the sidebar, it’s easy to accidentally click it. Not an issue for most of us at home, but for those in […]

You're reading How to Disable the ‘Recent’ Files Section in Nautilus, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

See Real-Time Power Usage (in Watts) in Ubuntu’s Top Panel [OMG! Ubuntu!]

If you’re looking for a no-fuss way to see real-time energy consumption on your Ubuntu laptop as you use it, a new GNOME Shell extension makes this deliciously easy. “Why would I want to see energy usage?” – anyone asking that question probably doesn’t. This is more for the curious folk; those keen to reveal the relative power demands of the software they run, the tasks they perform, the hardware settings they use, and the devices they connect – more of an educational tool than an essential one. Of course, you can monitor power consumption on Linux without any extension. […]

You're reading See Real-Time Power Usage (in Watts) in Ubuntu’s Top Panel, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Harnessing the Power of Linux to Drive Innovations in Neuroscience Research [Linux Journal - The Original Magazine of the Linux Community]

Harnessing the Power of Linux to Drive Innovations in Neuroscience Research

Introduction

The world of scientific computing has consistently leaned on robust, flexible operating systems to handle the demanding nature of research tasks. Linux, with its roots deeply embedded in the realms of free and open-source software, stands out as a powerhouse for computational tasks, especially in disciplines that require extensive data processing and modeling, such as neuroscience. This article delves into how Linux not only supports but significantly enhances neuroscience research, enabling breakthroughs that might not be as feasible with other operating systems.

The Role of Linux in Scientific Research

Linux is not just an operating system; it's a foundation for innovation, particularly in scientific research. Its design principles — stability, performance, and adaptability — make it an ideal choice for the computational demands of modern science. Globally, research institutions and computational labs have adopted Linux due to its superior handling of complex calculations and vast networks of data-processing operations.

Advantages of Linux in Neuroscience Research

Open Source Nature

One of the most compelling features of Linux is its open-source nature, which allows researchers to inspect, modify, and enhance the source code to suit their specific needs. This transparency is crucial in neuroscience, where researchers often need to tweak algorithms or simulations to reflect the complexity of neural processes accurately.

  • Collaborative Environment: The ability to share improvements and innovations without licensing restrictions fosters a collaborative environment where researchers worldwide can build upon each other's work. This is particularly valuable in neuroscience, where collective advancements can lead to quicker breakthroughs in understanding neurological disorders.

  • Customization and Innovation: Researchers can develop and share custom-tailored solutions, such as neural network simulations and data analysis tools, without the constraints of commercial software licenses.

Customization and Control

Linux offers unparalleled control over system operations, allowing researchers to optimize their computing environment down to the kernel level.

  • Custom Kernels: Neuroscience researchers can benefit from custom kernels that are optimized for tasks such as real-time data processing from neuroimaging equipment or managing large-scale neural simulations.

  • Performance Optimization: Linux allows the adjustment of system priorities to favor computation-heavy processes, crucial for running extensive simulations overnight or processing large datasets without interruption.

26-09-2024

08:42

Critical Unauthenticated RCE Flaw Impacts All GNU/Linux Systems [Slashdot: Linux]

"Looks like there's a storm brewing, and it's not good news," writes ancient Slashdot reader jd. "Whether or not the bugs are classically security defects or not, this is extremely bad PR for the Linux and Open Source community. It's not clear from the article whether this affects other Open Source projects, such as FreeBSD." From a report: A critical unauthenticated Remote Code Execution (RCE) vulnerability has been discovered, impacting all GNU/Linux systems. As per agreements with developers, the flaw, which has existed for over a decade, will be fully disclosed in less than two weeks. Despite the severity of the issue, no Common Vulnerabilities and Exposures (CVE) identifiers have been assigned yet, although experts suggest there should be at least three to six. Leading Linux distributors such as Canonical and RedHat have confirmed the flaw's severity, rating it 9.9 out of 10. This indicates the potential for catastrophic damage if exploited. However, despite this acknowledgment, no working fix is still available. Developers remain embroiled in debates over whether some aspects of the vulnerability impact security.

Read more of this story at Slashdot.

The Document Foundation announces the LibreOffice and Open Source Conference 2024 [Press Releases Archives - The Document Foundation Blog]

Berlin, 25 September 2024 – The LibreOffice and Open Source Conference 2024 will take place in Luxembourg from 10 to 12 October 2024. It will be hosted by the Digital Learning Hub and the local campus of 42 Luxembourg at the Terres Rouges buildings in Belval, Esch-sur-Alzette.

This is a key event that brings together the LibreOffice community – supporting the leading FOSS office suite – with a large number of stakeholders: large open source projects, international organizations and representatives from EU institutions and European government departments.

Organized in partnership with the Luxembourg Media & Digital Design Centre (LMDDC), which will host the EdTech track, the event is sponsored by allotropia and Collabora, the two companies contributing most actively to the development of LibreOffice; Passbolt, the Luxembourg made open source password manager for teams; and the Interdisciplinary Centre for Security, Reliability and Trust (SnT) of the University of Luxembourg.

In addition, local partners such as Luxembourg Convention Bureau, LIST, LU-CIX and Luxembourg House of Cybersecurity are supporting the organization of various aspects of the conference.

After the opening session in the morning of the 10 October, which includes institutional presentations from the Minister for Digitalisation, the Ministry of the Economy and the European Commission’s OSPO, there will be tracks about LibreOffice covering development, quality, security, documentation, localization, marketing and enterprise deployments, and tracks about open source covering technologies in education, OSS applications and cybersecurity. Another session will focus on OSPOs (Open Source Programme Offices).

The LibreOffice and Open Source Conference Luxembourg 2024 provides a platform to discuss the latest technical developments, community contributions, and the challenges facing open source software and communities of which TDF, LibreOffice and its community are important components. Professionals, developers, volunteers and users from various fields will share their experiences and collaborate on the future direction of the leading office suite.

Policy and decision makers will find counterparts from all over Europe with which they will be able to exchange ideas and experiences that will help them to promote and implement open source software in public, education and private sector organizations.

On 11 and 12 October, there will also be workshops focusing on different aspects of LibreOffice development, targeted at undergraduate Computer Science students, or anyone who knows programming and wants to become familiar with a large-scale, real-world open source software project. To better support the participants, the number of seats is limited to 20, so register for the workshops as soon as possible to reserve your place.

Everyone is encouraged to register and participate in the conference to engage with the open source community, learn from different experts and contribute to meaningful discussions. Please note that, to avoid waste, we will plan food, drinks and other free items for registered attendees, so help us cater for your needs by registering in time.

25-09-2024

09:12

Ubuntu 24.10 Beta Released, Available to Download [OMG! Ubuntu!]

A beta of Ubuntu 24.10 ‘Oracular Oriole’ is now available to download – a day later than planned! Developers and non-developers alike can download the beta to try the new features in Ubuntu 24.10, check compatibility, and flag any issues for fixing before the stable release takes flight next month. “The Beta images are known to be reasonably free of showstopper image build or installer bugs, while representing a very recent snapshot of 24.10 that should be representative [of] the final release”, says Canonical’s Utkarsh Gupta. This is the only beta release planned, though a release candidate will follow in a few […]

You're reading Ubuntu 24.10 Beta Released, Available to Download, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

23-09-2024

08:42

Vivaldi Web Browser is Now Available as a Snap [OMG! Ubuntu!]

Vivaldi web browser has arrived on the Canonical Snap Store – officially. This closed-source, Chromium-based web browser has been available on Linux since its debut in 2015, providing an official DEB package for Ubuntu users (which adds an APT repo for ongoing updates). And last year it became possible to get Vivaldi on Flathub – though that Flatpak build is only semi-official: maintained and packaged by a Vivaldi engineer, but not a recommended or supported package by Vivaldi itself – not yet, anyway! So to hear Vivaldi is embracing the Snap format is an interesting, albeit not surprising, move. It’s […]

You're reading Vivaldi Web Browser is Now Available as a Snap, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

21-09-2024

19:31

Torvalds Weighs in On 'Nasty' Rust vs C For Linux Debate [Slashdot: Linux]

The Rust vs C battle raging in Linux circles has left even Linus Torvalds scratching his head. "I'm not sure why Rust has been such a contentious area," the Linux creator mused at this week's Open Source Summit, likening the fervor to ancient text editor wars. "It reminds me of when I was young and people were arguing about vi versus Emacs." The spat over integrating Rust into Linux has been brewing since 2022, with critics slamming it as an "insult" to decades of kernel work. One maintainer recently quit, fed up with the "nontechnical nonsense." Torvalds struck a surprisingly diplomatic tone. He praised how Rust has "livened up discussions" while admitting some arguments get "nasty." "C is, in the end, a very simple language," Torvalds said, explaining its appeal and pitfalls. "Because it's simple it's also very easy to make mistakes. And Rust is not." Torvalds remains upbeat about Rust's future in Linux, nonetheless. "Even if it were to become a failure -- and I don't think it will -- that's how you learn," he said.

Read more of this story at Slashdot.

20 Years Later, Real-Time Linux Makes It To the Kernel [Slashdot: Linux]

ZDNet's Steven Vaughan-Nichols reports: After 20 years, Real-Time Linux (PREEMPT_RT) is finally -- finally -- in the mainline kernel. Linus Torvalds blessed the code while he was at Open Source Summit Europe. [...] The real-time Linux code is now baked into all Linux distros as of the forthcoming Linux 6.12 kernel. This means Linux will soon start appearing in more mission-critical devices and industrial hardware. But it took its sweet time getting here. An RTOS is a specialized operating system designed to handle time-critical tasks with precision and reliability. Unlike general-purpose operating systems like Windows or macOS, an RTOS is built to respond to events and process data within strict time constraints, often measured in milliseconds or microseconds. As Steven Rostedt, a prominent real-time Linux developer and Google engineer, put it, "Real-time is the fastest worst-case scenario." He means that the essential characteristic of an RTOS is its deterministic behavior. An RTOS guarantees that critical tasks will be completed within specified deadlines. [...] So, why is Real-Time Linux only now completely blessed in the kernel? "We actually would not push something up unless we thought it was ready," Rostedt explained. "Almost everything was usually rewritten at least three times before it went into mainline because we had such a high bar for what would go in." In addition, the path to the mainline wasn't just about technical challenges. Politics and perception also played a role. "In the beginning, we couldn't even mention real-time," Rostedt recalled. "Everyone said, 'Oh, we don't care about real-time.'" Another problem was money. For many years funding for real-time Linux was erratic. In 2015, the Linux Foundation established the Real-Time Linux (RTL) collaborative project to coordinate efforts around mainlining PREEMPT_RT. The final hurdle for full integration was reworking the kernel's printk function, a critical debugging tool dating back to 1991. Torvalds was particularly protective of printk -- he wrote the original code and still uses it for debugging. However, printk also puts a hard delay in a Linux program whenever it's called. That kind of slowdown is unacceptable in real-time systems. Rostedt explained: "Printk has a thousand hacks to handle a thousand different situations. Whenever we modified printk to do something, it would break one of these cases. The thing about printk that's great about debugging is you can know exactly where you were when a process crashed. When I would be hammering the system really, really hard, and the latency was mostly around maybe 30 microseconds, and then suddenly it would jump to five milliseconds." That delay was the printk message. After much work, many heated discussions, and several rejected proposals, a compromise was reached earlier this year. Torvalds is happy, the real-time Linux developers are happy, printk users are happy, and, at long last, real-time Linux is real.

Read more of this story at Slashdot.
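
Deterministic behavior ultimately comes down to scheduling policy. As an illustrative aside (not from the article), the short Python sketch below asks a Linux kernel for the SCHED_FIFO fixed-priority real-time policy -- the kind of request whose worst-case latency PREEMPT_RT is designed to bound. It uses only the standard os module; running it typically requires root or the CAP_SYS_NICE capability.

import os

# Highest priority the kernel allows under SCHED_FIFO, a fixed-priority
# real-time policy in which runnable tasks preempt lower-priority ones.
max_prio = os.sched_get_priority_max(os.SCHED_FIFO)

try:
    # pid 0 means "the calling process". On a PREEMPT_RT kernel, SCHED_FIFO
    # tasks preempt almost everything else, which is what bounds latency.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(max_prio))
    print(f"Running under SCHED_FIFO at priority {max_prio}")
except PermissionError:
    print("Setting a real-time policy requires root or CAP_SYS_NICE")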

Linus Torvalds Muses About Maintainer Gray Hairs, Next 'King of Linux' [Slashdot: Linux]

An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: In a candid keynote chat at the Linux Foundation's Open Source Summit Europe, Linux creator Linus Torvalds shared his thoughts on kernel development, the integration of Rust, and the future of open source. Dirk Hohndel, Verizon's Open Source Program Office head and a friend of Torvalds, moderated their conversation about the Linux ecosystem. Torvalds emphasized that kernel releases, like the recent 6.11 version, are intentionally not exciting. "For almost 15 years, we've had a very good regular cadence of releases," he explained. With releases every nine weeks, this regularity aims for timeliness and reliability rather than flashy new features. The Linux creator noted that while drivers still make up the bulk of changes, core kernel development continues to evolve. "I'm still surprised that we're doing very core development," Torvalds said, mentioning ongoing work in virtual file systems and memory management. [...] Shifting back to another contentious subject -- maintainer burnout and succession planning -- Hohndel observed that "maintainers are aging. Strangely, some of us have, you know, not quite as much or the right hair color anymore." (Torvalds interjected that "gray is the right color.") Hohndel continued, "So the question that I always ask myself: Is it about time to talk about there being a mini-Linus?" Torvalds turned the question around. True, the Linux maintainers are getting older and people do burn out and go away. "But that's kind of normal. What is not normal is that people actually stay around for decades. That's the unusual thing, and I think that's a good sign." At the same time, Torvalds admitted, it can be intimidating for a younger developer to join the Linux kernel team "when you see all these people who have been around for decades, but at the same time, we have many new developers. Some of those new developers come in, and three years later, they are top maintainers." Hohndel noted that "to be the king of Linux, the main maintainer, you have to have a lot of experience. And the backup right now is Greg KH (Greg Kroah-Hartman, maintainer of the stable Linux kernel), who is about the same age as we are and has even less hair." True, Torvalds responded, "But the thing is, Greg hasn't always been Greg. Before Greg, there's been Andrew (Morton) and Alan (Cox). After Greg, there will be Shannon and Steve. The real issue is you have to have a person or a group of people that the development community can trust, and part of trust is fundamentally about having been around for long enough that people know how you work, but long enough does not mean to be 30 years." Hohndel made one last comment: "What I'm trying to say is, you've been doing this for 33 years. I don't want to be morbid, but I think in 33 years, you may no longer be doing this?" Torvalds, making motions as though he was using a walker, replied, "I would love to still do this conference with you." The report notes the contention around the integration of Rust, highlighted by the recent departure of Rust for Linux maintainer Wedson Filho. Despite resistance from some devs who prefer C and are skeptical of Rust, Torvalds remains optimistic about Rust's future in the kernel. He said: "Rust is a very different thing, and there are a lot of people who are used to the C model. They don't like the differences, but that's OK. In the kernel itself, absolutely nobody understands everything. I don't. I rely heavily on maintainers of various subsystems. I think the same can be true of Rust and C. I think it's one of our strengths in the kernel that we can specialize. Clearly, some people just don't like the notion of Rust and having Rust encroach on their area. But we've only been doing Rust for a couple of years, so it's way too early to say Rust is a failure." Meanwhile, Torvalds confirmed that the long-anticipated real-time Linux (RTLinux) project will finally be integrated into the kernel with the upcoming release of Linux 6.12.

Read more of this story at Slashdot.

Linux Kernel 6.11 is Out [Slashdot: Linux]

Linux creator Linus Torvalds has released version 6.11 of the open-source operating system kernel. The new release, while not considered major by Torvalds, introduces several notable improvements for AMD hardware users and Arch Linux developers. ZDNet: This latest version introduces several enhancements, particularly for AMD hardware users, while offering broader system improvements and new capabilities. These include:

  • RDNA4 Graphics Support: The kernel now includes baseline support for AMD's upcoming RDNA4 graphics architecture. This early integration bodes well for future AMD GPU releases, ensuring Linux users have day-one support.
  • Core Performance Boost: The AMD P-State driver now includes handling for AMD Core Performance Boost, giving AMD users more granular control over turbo and boost frequency ranges.
  • Fast Collaborative Processor Performance Control (CPPC) Support: Overclockers who want the most power possible from their computers will be happy with this improvement to the AMD P-State driver. It enhances power efficiency on recent Ryzen (Zen 4) mobile processors and can improve performance by 2-6% without increasing power consumption.
  • AES-GCM Crypto Performance: AMD and Intel CPUs benefit from significantly faster AES-GCM encryption and decryption processing, up to 160% faster than previous versions.

Read more of this story at Slashdot.

Linux Developer Swatted and Handcuffed During Live Video Stream [Slashdot: Linux]

Last October Slashdot reported on René Rebe's discovery of a random illegal instruction speculation bug on AMD Ryzen 7000-series and Epyc Zen 4 CPUs — which Rebe discussed on his YouTube channel. But this week's YouTube episode had a different ending, reports Tom's Hardware... Two days ago, tech streamer and host of Code Therapy René Rebe was streaming one of many T2 Linux (his own custom distribution) development sessions from his office in Germany when he abruptly had to remove his microphone and walk off camera due to the arrival of police officers. The officers subsequently cuffed him and took him to the station for an hour of questioning, a span of time during which the stream continued to run until he made it back... [T]he police seemingly have no idea who did it and acted based on a tip sent via email. Finding the perpetrators could take a while, and options will be fairly limited if they don't also live in Germany. Rebe has been contributing to Linux "since as early as 1998," according to the article, "and started his own T2 SDE embedded Linux distribution in 2004, as well." (And he's also a contributor to many other major open source projects.) The article points out that Linux and other communities "are compelled by little-to-no profit motive, so in essence, René has been providing unpaid software development for the greater good for the past two decades."

Read more of this story at Slashdot.

A Simple Guide to Data Visualization on Ubuntu for Beginners [Linux Journal - The Original Magazine of the Linux Community]

Data visualization is not just an art form but a crucial tool in the modern data analyst's arsenal, offering a compelling way to present, explore, and understand large datasets. In the context of Ubuntu, one of the most popular Linux distributions, leveraging the power of data visualization tools can transform complex data into insightful, understandable visual narratives. This guide delves deep into the art and science of data visualization within Ubuntu, providing users with the knowledge to not only create but also optimize and innovate their data presentations.

Introduction to Data Visualization in Ubuntu

Ubuntu, known for its stability and robust community support, serves as an ideal platform for data scientists and visualization experts. The versatility of Ubuntu allows for the integration of a plethora of data visualization tools, ranging from simple plotting libraries to complex interactive visualization platforms. The essence of data visualization lies in its ability to turn abstract numbers into visual objects that the human brain can interpret much faster and more effectively than raw data.

Setting Up the Visualization Environment

Before diving into the creation of stunning graphics and plots, it's essential to set up your Ubuntu system for data visualization. Here's how you can prepare your environment:

System Requirements
  • A minimum of 4GB RAM is recommended, though 8GB or more is preferable for handling larger datasets.
  • At least 10GB of free disk space to install various tools and store datasets.
  • A processor with good computational capabilities (Intel i5 or better) ensures smooth processing of data visualizations.
Installing Necessary Software
  • Python and R: Start by installing Python and R, two of the most powerful programming languages for data analysis and visualization. You can install Python using the command sudo apt install python3 and R using sudo apt install r-base.
  • Visualization Libraries: Install Python libraries such as Matplotlib (pip install matplotlib), Seaborn (pip install seaborn), and Plotly (pip install plotly), along with R packages like ggplot2 (install.packages("ggplot2")).
Optimizing Performance
  • Configure your Ubuntu system to use swap space effectively, especially if RAM is limited.
  • Regularly update your system and installed packages to ensure compatibility and performance enhancements.

Exploring Data Visualization Tools on Ubuntu

Several tools and libraries are available for Ubuntu users, each with unique features and capabilities:
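
Before surveying them, here is a minimal sketch (an editorial addition, not part of the original guide) that renders a small bar chart with Seaborn on top of Matplotlib and saves it to a PNG. It assumes the matplotlib and seaborn packages from the installation step above; the dataset is invented purely for illustration.

import matplotlib
matplotlib.use("Agg")  # render to a file; no desktop session required
import matplotlib.pyplot as plt
import seaborn as sns

# A tiny in-memory dataset: monthly visitors to a hypothetical site.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
visitors = [1200, 1350, 1100, 1500, 1750, 1600]

sns.set_theme(style="whitegrid")   # Seaborn styling on top of Matplotlib
ax = sns.barplot(x=months, y=visitors, color="steelblue")
ax.set(title="Monthly Visitors", xlabel="Month", ylabel="Visitors")

plt.tight_layout()
plt.savefig("visitors.png", dpi=150)  # writes the chart to the working directory
print("Saved visitors.png")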

Bridging the Gap: The First Enterprise-Grade Linux Solution for the Cloud-to-Edge Continuum [Linux Journal - The Original Magazine of the Linux Community]

The Growing Demand for Specialized Linux Solutions

As the Linux market is set to soar to nearly USD 100 billion by 2032, businesses are facing mounting challenges in managing increasingly complex workloads spanning from the cloud to the edge. Traditional Linux distributions are not built to meet the specific demands of these modern use cases, creating an urgent need for a more specialized, enterprise-grade solution.

Historically, enterprises have depended on general-purpose Linux distributions operating across racked servers and hybrid data centers to centrally store and process their data. But with the rapid rise of edge computing and the Internet of Things (IoT), real-time data processing closer to the source has become mission-critical. Industries like healthcare, telecommunications, industrial automation, and defense now require localized, lightning-fast processing to make real-time decisions.

This shift to edge computing and connected IoT has sparked a surge of use cases that demand specialized solutions to address unique operational requirements such as size, performance, serviceability, and security. For instance, the telecommunications sector demands carrier-grade Linux (CGL) and edge vRAN solutions with reliability requirements exceeding 99.999% uptime.

Yet, traditional enterprise Linux distributions—while robust for central data centers—are too general to meet the diverse, exacting needs of IoT and edge environments. Linux offerings are continuing to expand beyond conventional distributions like Debian, Ubuntu, and Fedora, but the market lacks a unified platform that can effectively bridge the gap between edge and cloud workloads.

Today’s Complex Computing Needs Demand a Unified Solution

To stay competitive, businesses need computing solutions that process time-sensitive data at the edge, connect intelligent devices, and seamlessly share insights across cloud environments. But no single Linux provider has yet bridged the cloud-to-edge divide—until now.

Introducing eLxr Pro: One Seamless Solution for All Enterprise-Grade Workloads

Wind River® eLxr Pro breaks new ground as the industry’s first end-to-end Linux solution that connects enterprise-grade workloads from the cloud to the edge. By delivering unmatched commercial support for the open source eLxr project, Wind River has revolutionized how businesses manage critical workloads across distributed environments—unlocking new levels of efficiency and scalability.

As a founding member and leading contributor to the eLxr project, Wind River ensures the eLxr project’s enterprise-grade Debian-derivative distribution meets the evolving needs of mission-critical environments. This deep integration provides customers with unparalleled community influence and support, making Wind River the go-to provider for secure, reliable, enterprise-grade Linux deployments.

Why Ubuntu Secure Boot is Essential for Protecting Your Computer [Linux Journal - The Original Magazine of the Linux Community]

Introduction

As our reliance on technology grows, so does the need for robust security measures that protect systems from unauthorized access and malicious attacks. One critical area of focus is the system's boot process, a vulnerable phase where malware, rootkits, and other threats can potentially infiltrate and compromise the entire operating system. This is where Secure Boot, a feature of the UEFI (Unified Extensible Firmware Interface), comes into play, providing a defense mechanism against unauthorized software being loaded during the boot process.

Ubuntu, one of the most widely used Linux distributions, implements Secure Boot as part of its strategy to protect user systems from threats. While Secure Boot has stirred some debate in the open-source community due to its reliance on cryptographic signatures, its value in ensuring system integrity is undeniable. In this article, we will explore what Secure Boot is, how Ubuntu implements it, and its role in enhancing system security.

Understanding Secure Boot

What is Secure Boot?

Secure Boot is a security standard developed by members of the PC industry to ensure that a device boots only using software that is trusted by the manufacturer. It is a feature of UEFI firmware, which has largely replaced the traditional BIOS in modern systems. The fundamental purpose of Secure Boot is to prevent unauthorized code—such as bootkits and rootkits—from being executed during the boot process, which could otherwise compromise the operating system at a low level.

By requiring that each piece of software involved in the boot process be signed with a trusted certificate, Secure Boot ensures that only authenticated and verified code can run. If an untrusted or unsigned bootloader or kernel is detected, the boot process will be halted to prevent any malicious software from being loaded.

How Secure Boot Works

At its core, Secure Boot operates by maintaining a database of trusted keys and signatures within the UEFI firmware. When the system is powered on, UEFI verifies the digital signature of the bootloader, typically GRUB in Linux systems, against these trusted keys. If the bootloader’s signature matches a known trusted key, UEFI proceeds to load the bootloader, which then continues with loading the operating system kernel. Each component in this chain must have a valid cryptographic signature; otherwise, the boot process is stopped.

If a system has Secure Boot enabled, it verifies the integrity of the kernel and modules as well. This adds another layer of security, ensuring that not only the bootloader but also the OS components are secure.
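
To see the result of this verification chain from a running Ubuntu system, the firmware's SecureBoot state can be read back through efivarfs. The sketch below is an illustrative aside rather than part of the article; it assumes a UEFI system with efivarfs mounted at the standard path (the mokutil --sb-state command reports the same information). In efivarfs files, the first four bytes hold attribute flags and the payload follows — for SecureBoot, a single byte that is 1 when enabled.

from pathlib import Path

# The SecureBoot variable under the standardized EFI global-variable GUID.
VAR = Path("/sys/firmware/efi/efivars/"
           "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

if not VAR.exists():
    print("No EFI variable found (legacy BIOS boot, or efivarfs not mounted)")
else:
    data = VAR.read_bytes()
    # Bytes 0-3: variable attributes; byte 4: 1 = Secure Boot enabled.
    enabled = len(data) > 4 and data[4] == 1
    print("Secure Boot is", "enabled" if enabled else "disabled")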

12-09-2024

17:41

LibreOffice 24.2.6 available for download, for the privacy-conscious user [Press Releases Archives - The Document Foundation Blog]

Berlin, 5 September 2024 – LibreOffice 24.2.6, the sixth minor release of the free, volunteer-supported office productivity suite for office environments and individuals, the best choice for privacy-conscious users and digital sovereignty, is available at https://www.libreoffice.org/download for Windows, macOS and Linux.

The release includes over 40 bug and regression fixes over LibreOffice 24.2.5 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.6 is aimed at mainstream users and enterprise production environments.

LibreOffice is the only office suite with a feature set comparable to the market leader, and offers a range of user interface options to suit all users, from traditional to modern Microsoft Office-style. The UI has been developed to make the most of different screen form factors by optimizing the space available on the desktop to put the maximum number of features just a click or two away.

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.

Availability of LibreOffice 24.2.6

LibreOffice 24.2.6 is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Windows 7 SP1 and macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/.

Next week, power users and technology enthusiasts will be able to download LibreOffice 24.8.1, the first minor release of the recently announced new version with many bug and regression fixes. A summary of the new features of the LibreOffice 24.8 family is available on this blog post: https://blog.documentfoundation.org/blog/2024/08/22/libreoffice-248/.

End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which is available for download from the following link: https://books.libreoffice.org/. In addition, they will be able to get first-level technical support from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org.

LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.

[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.6/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.6/RC2.

LibreOffice 24.8, for the privacy-conscious office suite user [Press Releases Archives - The Document Foundation Blog]

The new major release provides a wealth of new features, plus a large number of interoperability improvements

Berlin, 22 August 2024 – LibreOffice 24.8, the new major release of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple and Intel) and Linux is available from our download page. This is the second major release to use the new calendar-based numbering scheme (YY.M), and the first to provide an official package for Windows PCs based on ARM processors.

LibreOffice is the only office suite, or if you prefer, the only software for creating documents that may contain personal or confidential information, that respects the privacy of the user – thus ensuring that the user is able to decide if and with whom to share the content they have created. As such, LibreOffice is the best option for the privacy-conscious office suite user, and provides a feature set comparable to the leading product on the market. It also offers a range of interface options to suit different user habits, from traditional to contemporary, and makes the most of different screen sizes by optimising the space available on the desktop to put the maximum number of features just a click or two away.

The biggest advantage over competing products is the LibreOffice Technology engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those provided by ecosystem companies – are based. This allows LibreOffice to offer a better user experience and to produce identical and perfectly interoperable documents based on the two available ISO standards: the Open Document Format (ODT, ODS and ODP), and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a large amount of artificial complexity, which may create problems for users who are confident that they are using a true open standard.

End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which is available for download from the Bookshelf. In addition, they will be able to get first-level technical support from volunteers on user mailing lists and the Ask LibreOffice website.

New Features of LibreOffice 24.8

PRIVACY

  • If the option Tools ▸ Options ▸ LibreOffice ▸ Security ▸ Options ▸ Remove personal information on saving is enabled, then personal information will not be exported (author names and timestamps, editing duration, printer name and config, document template, author and date for comments and tracked changes)

WRITER

  • UI: handling of formatting characters, width of comments panel, selection of bullets, new dialog for hyperlinks, new Find deck in the sidebar
  • Navigator: adding cross-references by drag-and-drop items, deleting footnotes and endnotes, indicating images with broken links
  • Hyphenation: exclude words from hyphenation with new contextual menu and visualization, new hyphenation across columns, pages or spreads, hyphenation between constituents of a compound word

CALC

  • Addition of FILTER, LET, RANDARRAY, SEQUENCE, SORT, SORTBY, UNIQUE, XLOOKUP and XMATCH functions
  • Improvement of threaded calculation performance, optimization of redraw after a cell change by minimizing the area that needs to be refreshed
  • Cell focus rectangle moved apart from cell content
  • Comments can be edited and deleted from the Navigator’s right-click menu

IMPRESS & DRAW

  • In Normal view, it is now possible to scroll between slides, and the Notes are available as a collapsible pane under the slide
  • By default, the running Slideshow is now immediately updated when applying changes in EditView or in PresenterConsole, even on different screens

CHART

  • New chart types “Pie-of-Pie” and “Bar-of-Pie” break down a slice of a pie as a pie or bar sub-chart respectively (this also enables import of such charts from OOXML files created with Microsoft Office)
  • Text inside chart’s titles, text boxes and shapes (and parts thereof) can now be formatted using the Character dialog

ACCESSIBILITY

  • Several improvements to the management of formatting options, which can now be announced properly by screen readers

SECURITY

  • New mode of password-based ODF encryption

INTEROPERABILITY

  • Support importing and exporting OOXML pivot table (cell) format definitions
  • PPTX files with heavy use of custom shapes now open faster

A video showcasing the most significant new features is available on YouTube and PeerTube.

Contributors to LibreOffice 24.8

There are 171 contributors to the new features of LibreOffice 24.8: 57% of code commits come from the 49 developers employed by companies on TDF’s Advisory Board – Collabora, allotropia and Red Hat – and other organisations, another 20% from seven developers at The Document Foundation, and the remaining 23% from 115 individual volunteer developers.

An additional 188 volunteers have committed localized strings in 160 languages, representing hundreds of people actually providing translations. LibreOffice 24.8 is available in 120 languages, more than any other desktop software, making it available to over 5.5 billion people in their native language. In addition, over 2.4 billion people speak one of these 120 languages as a second language (L2).

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: LibreOffice in Business.

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.

Migrations to LibreOffice

The Document Foundation has developed a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the deployment of an LTS (long-term support) enterprise-optimised version of LibreOffice plus migration consulting and training provided by certified professionals who offer value-added solutions consistent with proprietary offerings. Reference: professional support page.

In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.

Availability of LibreOffice 24.8

LibreOffice 24.8 is available on our download page. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 [1] and Apple MacOS 10.15. LibreOffice Technology-based products for Android and iOS are listed on this page.

For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 24.2 family, which includes several months of back-ported fixes. The current release is LibreOffice 24.2.5.

LibreOffice users, free software advocates and community members can support The Document Foundation with a donation on our donate page.

[1] This does not mean that The Document Foundation suggests the use of this operating system, which is no longer supported by Microsoft itself, and as such should not be used for security reasons.

Release Notes: wiki.documentfoundation.org/ReleaseNotes/24.8

Press Kit with Images: nextcloud.documentfoundation.org/s/JEe8MkDZWMmAGmS

11-09-2024

09:42

How Linux Shapes Modern Cloud Computing [Linux Journal - The Original Magazine of the Linux Community]

Introduction

Cloud computing has transformed the way businesses and individuals store, manage, and process data. At its core, cloud computing refers to the on-demand availability of computing resources—such as storage, processing power, and applications—over the internet, eliminating the need for local infrastructure. With scalability, flexibility, and cost efficiency as its hallmarks, cloud computing has become an essential element in the digital landscape.

While cloud computing can be run on various operating systems, Linux has emerged as the backbone of the majority of cloud infrastructures. Whether powering public cloud services like Amazon Web Services (AWS), Google Cloud Platform (GCP), or private clouds used by enterprises, Linux provides the performance, security, and flexibility required for cloud operations. This article delves into why Linux has become synonymous with cloud computing, its key roles in various cloud models, and the future of Linux in this ever-evolving field.

Why Linux is Integral to Cloud Computing

Open Source Nature

One of the primary reasons Linux is so deeply integrated into cloud computing is its open source nature. Linux is free to use, modify, and distribute, which makes it attractive for businesses and cloud service providers alike. Companies are not locked into restrictive licensing agreements and are free to tailor Linux to their specific needs, an advantage not easily found in proprietary systems like Windows.

The open source nature of Linux also fosters collaboration. Thousands of developers continuously improve Linux, making it more secure, efficient, and feature-rich. For cloud computing, where innovation is key, this continuous improvement ensures that Linux remains adaptable to the latest technological advances.

Performance and Stability

In cloud environments, performance and uptime are critical. Any downtime or inefficiency can have a ripple effect, causing disruptions for businesses and users. Linux is renowned for its stability and high performance under heavy workloads. Its efficient handling of system resources—such as CPU and memory management—enables cloud providers to maximize performance and minimize costs. Additionally, Linux’s stability ensures that systems run smoothly without frequent crashes or the need for constant reboots, a crucial factor in maintaining high availability for cloud services.

09-09-2024

13:28

Unlocking the Secrets of Writing Custom Linux Kernel Drivers for Smooth Hardware Integration [Linux Journal - The Original Magazine of the Linux Community]

Introduction

Kernel drivers are the bridge between the Linux operating system and the hardware components of a computer. They play a crucial role in managing and facilitating communication between the OS and various hardware devices, such as network cards, storage devices, and more. Writing custom kernel drivers allows developers to interface with new or proprietary hardware, optimize performance, and gain deeper control over system resources.

In this article, we will explore the intricate process of writing custom Linux kernel drivers for hardware interaction. We'll cover the essentials, from setting up your development environment to advanced topics like debugging and performance optimization. By the end, you'll have a thorough understanding of how to create a functional and efficient driver for your hardware.

Prerequisites

Before diving into driver development, it's important to have a foundational knowledge of Linux, programming, and kernel development. Here’s what you need to know:

Basic Linux Knowledge

Familiarity with Linux commands, file systems, and system architecture is essential. You'll need to navigate through directories, manage files, and understand how the Linux OS functions at a high level.

Programming Skills

Kernel drivers are primarily written in C. Understanding C programming and low-level system programming concepts is crucial for writing effective drivers. Knowledge of data structures, memory management, and system calls will be particularly useful.

Kernel Development Basics

Understanding the difference between kernel space and user space is fundamental. Kernel space is where drivers and the core of the operating system run, while user space is where applications operate. Familiarize yourself with kernel modules, which are pieces of code that can be loaded into the kernel at runtime.

Setting Up the Development Environment

Having a properly configured development environment is key to successful kernel driver development. Here’s how to get started:

Linux Distribution and Tools

Choose a Linux distribution that suits your needs. Popular choices for kernel development include Ubuntu, Fedora, and Debian. Install essential development tools, including:

  • GCC: The GNU Compiler Collection, which includes the C compiler.
  • Make: A build automation tool.
  • Kernel Headers: Necessary for compiling kernel modules.

You can install these tools using your package manager. For example, on Ubuntu, you can use:

sudo apt-get install build-essential
sudo apt-get install linux-headers-$(uname -r)
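
With the toolchain installed, a gentle first step before writing your own driver is to inspect the modules the running kernel has already loaded. The following Python sketch is an editorial addition, not from the original article: it parses /proc/modules, which the kernel exposes as plain text, and shells out to the standard modinfo(8) tool for details on the first module found.

import subprocess

# /proc/modules lists loaded modules: name, memory size, refcount, users, ...
with open("/proc/modules") as f:
    modules = [line.split()[0] for line in f]

print(f"{len(modules)} modules loaded; first five: {modules[:5]}")

# Query metadata for one loaded module via modinfo (full path, since
# /sbin may not be on a regular user's PATH on Debian/Ubuntu).
if modules:
    info = subprocess.run(["/sbin/modinfo", modules[0]],
                          capture_output=True, text=True)
    print(info.stdout.splitlines()[0] if info.returncode == 0 else info.stderr)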

Linux Filesystem Hierarchy: Your Guide to Understanding Its Layout [Linux Journal - The Original Magazine of the Linux Community]

Introduction

Navigating the Linux filesystem hierarchy can be a daunting task for newcomers and even seasoned administrators. Unlike some other operating systems, Linux follows a unique directory structure that is both systematic and crucial for system management and operation. Understanding this structure is essential for efficient system administration, troubleshooting, and software management. In this article, we’ll dive deep into the Linux filesystem hierarchy, exploring each directory's purpose and significance.

The Root Directory (/)

At the pinnacle of the Linux filesystem hierarchy is the root directory, denoted by a single forward slash (/). This directory is the starting point from which all other directories branch out. Think of it as the base of a tree, with all other directories extending from it.

The root directory is essential for the operating system’s overall structure, providing the foundation upon which the entire filesystem is built. All files and directories, regardless of their location, can ultimately be traced back to the root directory.
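
To make the "everything traces back to /" point concrete, here is a tiny sketch (an editorial addition, assuming a stock modern Ubuntu install) using Python's pathlib. The merged-/usr note applies to recent Debian and Ubuntu releases, where /bin is itself a symlink into /usr.

from pathlib import Path

# Every absolute path walks back, parent by parent, to the root "/".
p = Path("/var/log/syslog")
for parent in p.parents:
    print(parent)                 # /var/log, then /var, and finally /

# On merged-/usr systems, /bin is a symlink that resolves into /usr:
print(Path("/bin").resolve())     # typically prints /usr/bin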

Key Directories and Their Purposes

Understanding the primary directories within the Linux filesystem is crucial for effective navigation and management. Here’s a detailed look at each significant directory:

  • /bin

    • Purpose: The /bin directory houses essential binary executables that are necessary for the system to function correctly, even in single-user mode. These binaries are crucial for basic system operations and recovery.
    • Examples: Common commands found here include ls (list directory contents), cp (copy files), and rm (remove files). These utilities are used by both system administrators and regular users.
  • /sbin

    • Purpose: Similar to /bin, the /sbin directory contains system binaries, but these are primarily administrative commands used for system maintenance and configuration. These binaries are typically used by the root user or system administrators.
    • Examples: Commands such as fsck (filesystem check), reboot (reboot the system), and ifconfig (network interface configuration) are located here.
  • /etc

04-09-2024

18:29

Rust for Linux Maintainer Steps Down in Frustration With 'Nontechnical Nonsense' [Slashdot: Linux]

Efforts to add Rust code to the Linux kernel have suffered a setback as one of the maintainers of the Rust for Linux project has stepped down -- citing frustration with "nontechnical nonsense," according to The Register: Wedson Almeida Filho, a software engineer at Microsoft who has overseen the Rust for Linux project, announced his resignation in a message to the Linux kernel development mailing list. "I am retiring from the project," Filho declared. "After almost four years, I find myself lacking the energy and enthusiasm I once had to respond to some of the nontechnical nonsense, so it's best to leave it up to those who still have it in them." [...] Memory safety bugs are regularly cited as the major source of serious software vulnerabilities by organizations overseeing large projects written in C and C++. So in recent years there's been a concerted push from large developers like Microsoft and Google, as well as from government entities like the US Cybersecurity and Infrastructure Security Agency, to use memory-safe programming languages -- among them Rust. Discussions about adding Rust to Linux date back to 2020 and were realized in late 2022 with the release of Linux 6.1. "I truly believe the future of kernels is with memory-safe languages," Filho's note continued. "I am no visionary but if Linux doesn't internalize this, I'm afraid some other kernel will do to it what it did to Unix."

Read more of this story at Slashdot.

31-08-2024

18:41

Linux 6.12 To Optionally Display A QR Code During Kernel Panics [Slashdot: Linux]

New submitter meisdug writes: A new feature has been submitted for inclusion in Linux 6.12, allowing the display of a QR code when a kernel panic occurs using the DRM Panic handler. This QR code can capture detailed error information that is often missed in traditional text-based panic messages, making it more user-friendly. The feature, written in Rust, is optional and can be enabled via a specific build switch. This implementation follows similar ideas from other operating systems and earlier discussions in the Linux community.

Read more of this story at Slashdot.

30-08-2024

17:54

EmuDeck Enters the Mini PC Market With Linux-Powered 'EmuDeck Machines' [Slashdot: Linux]

An anonymous reader quotes a report from overkill.wtf: The team behind popular emulation tool EmuDeck is today announcing something rather special: they've spent the first half of 2024 working on their very first hardware product, called the EmuDeck Machine, and it's due to arrive before the year is out. This EmuDeck Machine is an upcoming, crowdfunded, retro emulation mini PC running Bazzite, a Linux-based system similar to SteamOS. [...] This new EmuDeck Machine comes in two variants, the EM1 running an Intel N97 APU, and the EM2 -- based on an AMD Ryzen 8600G. While both machines are meant as emulation-first devices, the AMD-based variant can easily function as a console-like PC. This is also thanks to some custom work done by the team: "We've optimized the system for maximum power. The default configuration of an 8600G gets you 32 FPS in Cyberpunk; we've managed to reach 47 FPS with a completely stable system, or 60FPS if you use FSR." Both machines will ship with a Gamesir Nova Lite controller and EmuDeck preinstalled, naturally. The team has also preinstalled all available Decky plugins. But that's not all: if the campaign is successful, the EmuDeck team will also work on a docking station for the EM2 that will upgrade the graphics to an AMD Radeon 7600 desktop GPU. With this, in games like Cyberpunk 2077, you'll be able to reach 160 FPS in 1080p as per EmuDeck's measurements. You can preorder the EmuDeck Machines via Indiegogo, starting at $322 and shipping in December.

Read more of this story at Slashdot.

29-08-2024

18:35

'Uncertainty' Drives LinkedIn To Migrate From CentOS To Azure Linux [Slashdot: Linux]

The Register's Liam Proven reports: Microsoft's in-house professional networking site is moving to Microsoft's in-house Linux. This could mean that big changes are coming for the former CBL-Mariner distro. Ievgen Priadka's post on the LinkedIn Engineering blog, titled Navigating the transition: adopting Azure Linux as LinkedIn's operating system, is the visible sign of what we suspect has been a massive internal engineering effort. It describes some of the changes needed to migrate what the post calls "most of our fleet" from the end-of-life CentOS 7 to Microsoft Azure Linux -- the distro that grew out of and replaced its previous internal distro, CBL-Mariner. This is an important stage in a long process. Microsoft acquired LinkedIn way back in 2016. Even so, as recently as the end of last year, we reported that a move to Azure had been abandoned, which came a few months after it laid off almost 700 LinkedIn staff -- the majority in R&D. The blog post is over 3,500 words long, so there's quite a lot to chew on -- and we're certain that this has been passed through and approved by numerous marketing and management people and scoured of any potentially embarrassing admissions. Some interesting nuggets remain, though. We enjoyed the modest comment that: "However, with the shift to CentOS Stream, users felt uncertain about the project's direction and the timeline for updates. This uncertainty created some concerns about the reliability and support of CentOS as an operating system." [...] There are some interesting technical details in the post too. It seems LinkedIn is running on XFS -- also the RHEL default file system, of course -- with the notable exception of Hadoop, and so the Azure Linux team had to add XFS support. Some CentOS and actual RHEL is still used in there somewhere. That fits perfectly with using any of the RHELatives. However, the post also mentions that the team developed a tool to aid with deploying via MaaS, which it explicitly defines as Metal as a Service. MaaS is a Canonical service, although it does support other distros -- so as well as CentOS, there may have been some Ubuntu in the LinkedIn stack as well. Some details hint at what we suspect were probably major deployment headaches. [...] Some of the other information covers things the teams did not do, which is equally informative. [...]

Read more of this story at Slashdot.

22-08-2024

17:51

Announcement of LibreOffice 24.2.5 Community, optimized for the privacy-conscious user [Press Releases Archives - The Document Foundation Blog]

Berlin, 11 July 2024 – LibreOffice 24.2.5 Community, the fifth minor release of the free, volunteer-supported office productivity suite for office environments and individuals, the best choice for privacy-conscious users and digital sovereignty, is available at www.libreoffice.org/download for Windows, macOS and Linux.

The release includes more than 70 bug and regression fixes over LibreOffice 24.2.4 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.5 Community is the most advanced version of the office suite and is aimed at power users but can be used safely in other environments.

LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all users, from traditional to modern Microsoft Office-style, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform. All products based on that platform share the same approach, optimised for the privacy-conscious user.

Availability of LibreOffice 24.2.5 Community

LibreOffice 24.2.5 Community is available at www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: www.libreoffice.org/download/android-and-ios/

For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains a version with some months of back-ported fixes. That older release has reached end of life, however, so its users should update to LibreOffice 24.2.5, which will become the more conservative choice when the new major release LibreOffice 24.8 becomes available in August.

The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: ask.libreoffice.org

LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at www.libreoffice.org/donate

[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.2.5/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.2.5/RC2.

LibreOffice 24.2.4 Community available for download [Press Releases Archives - The Document Foundation Blog]

Berlin, 6 June 2024 – LibreOffice 24.2.4 Community, the fourth minor release of the free, volunteer-supported office suite for personal productivity in office environments, is now available at https://www.libreoffice.org/download for Windows, MacOS and Linux.

The release includes over 70 bug and regression fixes over LibreOffice 24.2.3 [1] to improve the stability and robustness of the software. LibreOffice 24.2.4 Community is the most advanced version of the office suite, offering the best features and interoperability with Microsoft Office proprietary formats.

LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all user habits, from traditional to modern, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.

Availability of LibreOffice 24.2.4 Community

LibreOffice 24.2.4 Community is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple MacOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/

For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 7.6 family, which includes several months of back-ported fixes. The current release is LibreOffice 7.6.7 Community, but it will soon be replaced by LibreOffice 24.2.4 when the new major release LibreOffice 24.8 becomes available.

The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org

LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.

[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.4/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.4/RC2.

LibreOffice 7.6.7 for productivity environments [Press Releases Archives - The Document Foundation Blog]

Berlin, May 10, 2024 – LibreOffice 7.6.7 Community, the last minor release of the 7.6 line, is available from https://www.libreoffice.org/download for Windows, macOS, and Linux. This is the most thoroughly tested version, for deployments by individuals, small and medium businesses, and other organizations in productivity environments. This new minor release fixes bugs and regressions which can be looked up in the changelog [1].

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with many dedicated value-added features and other benefits such as SLA (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/

Users can download LibreOffice 7.6.7 Community from the office suite website: https://www.libreoffice.org/download/. Minimum requirements are Microsoft Windows 7 SP1 and Apple macOS 10.14. LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/

The Document Foundation does not provide technical support for users, although they can be helped by volunteers on user mailing lists and on the Ask LibreOffice website: https://ask.libreoffice.org

LibreOffice users, free software advocates and community members can support The Document Foundation with a donation at https://www.libreoffice.org/donate

[1] Change log pages: https://wiki.documentfoundation.org/Releases/7.6.7/RC1 and https://wiki.documentfoundation.org/Releases/7.6.7/RC2

11-07-2024

19:13

Announcement of LibreOffice 24.2.3 Community [Press Releases Archives - The Document Foundation Blog]

Berlin, 2 May 2024 – LibreOffice 24.2.3 Community, the third minor release of the free, volunteer-supported office suite for personal productivity in office environments, is now available at https://www.libreoffice.org/download for Windows, macOS and Linux.

The release includes around 80 bug and regression fixes over LibreOffice 24.2.2 [1] to improve the stability and robustness of the software. LibreOffice 24.2.3 Community is the most advanced version of the office suite, offering the best features and interoperability with Microsoft Office proprietary formats.

LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all user habits, from traditional to modern, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.

The most significant advantage of LibreOffice over other office suites is the LibreOffice Technology engine, a single software platform for all environments: desktop, cloud and mobile. This allows LibreOffice to provide a better user experience and produce identical and interoperable documents based on both ISO standards: the Open Document Format (ODT, ODS and ODP) for users concerned about compatibility, resilience and digital sovereignty, and the proprietary Microsoft format (DOCX, XLSX and PPTX).

A full description of all the new features of the LibreOffice 24.2 major release line can be found in the release notes [2].

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.

Availability of LibreOffice 24.2.3 Community

LibreOffice 24.2.3 Community is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/

For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 7.6 family, which includes several months of back-ported fixes. The current release is LibreOffice 7.6.6 Community.

The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org

LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate

[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.3/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.3/RC2.

[2] Release Notes: https://wiki.documentfoundation.org/ReleaseNotes/24.2

06-06-2024

18:45

Joint release of LibreOffice 24.2.2 Community and LibreOffice 7.6.6 Community [Press Releases Archives - The Document Foundation Blog]

Berlin, 28 March 2024 – Today the Document Foundation releases LibreOffice 24.2.2 Community [1] and LibreOffice 7.6.6 Community [2], both minor releases that fix bugs and regressions to improve quality and interoperability for individual productivity.

Both versions are immediately available from https://www.libreoffice.org/download. All LibreOffice users are encouraged to update their current version as soon as possible to take advantage of improvements. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.15.

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/.

The Document Foundation does not provide technical support to users, although it is available from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org.

LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.

[1] Change logs for LibreOffice 24.2.2 Community: https://wiki.documentfoundation.org/Releases/24.2.2/RC1 (release candidate 1) and https://wiki.documentfoundation.org/Releases/24.2.2/RC2 (release candidate 2).

[2] Change logs for LibreOffice 7.6.6 Community: https://wiki.documentfoundation.org/Releases/7.6.6/RC1 (release candidate 1) and https://wiki.documentfoundation.org/Releases/7.6.6/RC2 (release candidate 2).

10-05-2024

15:03

Announcement of LibreOffice 24.2.1 Community [Press Releases Archives - The Document Foundation Blog]

Berlin, 29 February 2024 – LibreOffice 24.2.1 Community, the first minor release of the free, volunteer-supported office suite for personal productivity in office environments, is now available at https://www.libreoffice.org/download for Windows, macOS and Linux.

The release includes more than 100 bug and regression fixes over LibreOffice 24.2 [1] to improve the stability and robustness of the software. LibreOffice 24.2.1 Community is the most advanced version of the office suite, offering the best features and interoperability with Microsoft Office proprietary formats.

LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all user habits, from traditional to modern, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.

Highlights of LibreOffice 24.2.1 Community

The main advantage of LibreOffice over other office suites is the LibreOffice Technology engine, a single software platform for all environments: desktop, cloud and mobile. This allows LibreOffice to provide a better user experience and produce identical – and interoperable – documents based on both ISO standards: Open Document Format (ODT, ODS and ODP) for users concerned about compatibility, resilience and digital sovereignty, and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX).

Most notable new features in the LibreOffice 24.2 family:

GENERAL
• Save AutoRecovery information is enabled by default, and backup copies are always created
• Fixed various NotebookBar options, with many menu improvements, better print preview support, proper reset of customised layout, and enhanced use of radio buttons
• The Insert Special Character drop-down list now displays a character description for the selected character (and in the tooltip when you hover over it)

WRITER
• “Legal” ordered list numbering: make a given list level use Arabic numbering for all its numeric portions
• Comments can now use styles, with the Comment paragraph style being the default
• Improved various aspects of multi-page floating table support: overlap control, borders and footnotes, nesting, wrap on all pages, and related UI improvements

CALC
• A new search field has been added to the Functions sidebar deck
• The scientific number format is now supported and saved in ODF
• Highlight the Row and Column corresponding to the active cell

IMPRESS & DRAW
• The handling of small caps has been implemented for Impress
• Moved Presenter Console and Remote control settings from Tools > Options > LibreOffice Impress to Slide Show > Slide Show Settings, with improved labelling and dialogue layout
• Several improvements and fixes to templates

ACCESSIBILITY
• Several significant improvements to the handling of mouse positions and the presentation of dialogue boxes via the Accessibility APIs, allowing screen readers to present them correctly
• Improved management of IAccessible2 roles and text/object attributes, allowing screen readers to present them correctly
• Status bars in dialogs are reported with the correct accessible role so that screen readers can find and report them appropriately, while checkboxes in dialogs can be toggled using the space bar

SECURITY
• The Save with Password dialogue box now has a password strength meter
• New password-based ODF encryption that performs better, hides metadata better, and is more resistant to tampering and brute force
• Clarification of the text in the options dialogue box around the macro security settings, so that it is clear exactly what is allowed and what is not

The LibreOffice 24.2 family offers a host of enhancements and new features aimed at users sharing documents with or migrating from MS Office, building on the advanced features of the LibreOffice Technology platform for personal productivity on the desktop, mobile and in the cloud.

A full description of all the new features can be found in the release notes [2].

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/

Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.

Availability of LibreOffice 24.2.1 Community

LibreOffice 24.2.1 Community is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/

For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 7.6 family, which includes several months of back-ported fixes. The current release is LibreOffice 7.6.5 Community.

The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org

LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.

[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.1/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.1/RC2.

[2] Release Notes: https://wiki.documentfoundation.org/ReleaseNotes/24.2

21-03-2024

19:41

LVM Logical Volumes [linux blogs franz ulenaers]

LVM = Logical Volume Manager



A partition of type "Linux LVM" can be used for logical volumes, but also as a "snapshot"!
A snapshot can be an exact copy of a logical volume, frozen at a given moment: this makes it possible to make consistent backups of logical volumes
while the logical volumes are in use!
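
As a rough sketch of that backup-from-a-snapshot idea (a minimal illustration only, not the attached LVM_bkup script; the volume group mijnvg, the volume mijnhome and the sizes are example names that also appear further down):

    lvcreate -s -L 5G -n mijnhome_snap /dev/mijnvg/mijnhome
    mkdir /mnt/snap
    mount -o ro /dev/mijnvg/mijnhome_snap /mnt/snap
    tar czf /backup/mijnhome.tar.gz -C /mnt/snap .
    umount /mnt/snap
    lvremove /dev/mijnvg/mijnhome_snap

The -s flag creates the snapshot; the 5G is the space reserved for the changes written to the origin volume while the snapshot exists.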

How to install?

    sudo apt-get install lvm2



Create a physical volume for a partition

    command = 'pvcreate' partition

      example:

        the partition must be of type "Linux LVM"!

        pvcreate /dev/sda5



create a volume group

    vgcreate vg_storage partition

      example

        vgcreate mijnvg /dev/sda5



add a logical volume to a volume group

    lvcreate -L size_in_M/G -n logical_volume_name volume_group

      example:

        lvcreate -L 30G -n mijnhome mijnvg



activate a volume group

    vgchange -a y volume_group_name

      example:

        vgchange -a y mijnvg



My physical and logical volumes

    physical volume

      pvcreate /dev/sda1

    volume group

      vgcreate mydell /dev/sda1

    logical volumes

      lvcreate -L 1G -n boot mydell

      lvcreate -L 100G -n data mydell

      lvcreate -L 50G -n home mydell

      lvcreate -L 50G -n root mydell

      lvcreate -L 1G -n swap mydell



Growing/shrinking a logical volume

    grow my home logical volume by 1 G

      lvextend -L +1G /dev/mapper/mydell-home

    beware: shrinking a logical volume can lead to data loss if not enough space is left!

      lvreduce -L -1G /dev/mapper/mydell-home
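
A safer shrink first shrinks the filesystem, then the volume, then lets the filesystem grow back to the exact volume size. A minimal sketch for an ext4 volume (the sizes are examples, and the volume must be unmounted first):

    umount /dev/mapper/mydell-home
    e2fsck -f /dev/mapper/mydell-home
    resize2fs /dev/mapper/mydell-home 92G
    lvreduce -L 93G /dev/mapper/mydell-home
    resize2fs /dev/mapper/mydell-home

Shrinking the filesystem slightly below the target volume size, then growing it back, avoids the filesystem ever being larger than the volume that holds it.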



show physical volumes

sudo pvs

    shown: PV physical volume, VG volume group, Fmt format (normally lvm2), Attr attributes, PSize size of the PV, PFree free space

      PV        VG     Fmt  Attr PSize   PFree

      /dev/sda6 mydell lvm2 a--  920,68g 500,63g

sudo pvs -a

sudo pvs /dev/sda6



Backing up the settings of logical volumes

    see the attached script LVM_bkup



show volume groups

    sudo vgs

      VG     #PV #LV #SN Attr   VSize   VFree

      mydell   1   6   0 wz--n- 920,68g 500,63g



show logical volume(s)

    sudo lvs

      LV       VG     Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert

      boot     mydell -wi-ao---- 952,00m

      data     mydell -wi-ao---- 100,00g

      home     mydell -wi-ao----  93,13g

      mintroot mydell -wi-a----- 101,00g

      root     mydell -wi-ao----  94,06g

      swap     mydell -wi-ao----  30,93g



how to remove a logical volume?

    a logical volume can only be removed when it is not active

      deactivating the volume group can be done with the vgchange command

        vgchange -a n mydell

    lvremove /dev/my_volume_group/logical_volume_name

      example:

        lvremove /dev/mydell/data





how to remove a physical volume from a volume group?

    vgreduce mydell /dev/sda1




Attachments: LVM_bkup (0.8 KB)




How to mount and umount a stick without being root, with your own rwx permissions! [linux blogs franz ulenaers]

Mounting a stick without root

how to mount and umount a usb stick without being root and with rwx permissions?
---------------------------------------------------------------------------------------------------------
(rename every ulefr01 to your own username!)

label the stick

  • use the 'fatlabel' command to assign a volume name or label if your usb stick uses a vfat filesystem

  • use the 'tune2fs' command for ext2,3,4

    • to create the volume name stick32GB on your usb stick, use the command:

sudo tune2fs -L stick32GB /dev/sdc1

note: replace /dev/sdc1 with the correct device!
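
to check that the label was actually set, it can be read back (e2label is part of e2fsprogs, lsblk of util-linux):

sudo e2label /dev/sdc1

lsblk -o NAME,LABEL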


make the filesystem on your stick clean

  • after mounting you may see dmesg messages like: Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

    • use the file system consistency check command fsck to fix this

      • do a umount before you run fsck! (use the correct device!)

        • fsck /dev/sdc1

note: replace /dev/sdc1 with your device!


set permissions on the directories and files of your stick

  • Insert your stick into a usb port and umount the stick

sudo chown ulefr01:ulefr01 /media/ulefr01/ -R
  • set an acl on your ext2,3,4 stick (does not work on vfat!)

setfacl -m u:ulefr01:rwx /media/ulefr01
  • with getfacl you can see the acl

getfacl /media/ulefr01
  • with the ls command you can see the result

ls /media/ulefr01 -dla

drwxrwx--- 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01

note: if the '+' is present then an acl is already in place, as on the following line:

drwxrwx---+ 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01


Mount the stick

  • Insert your stick into a usb port and check whether it is mounted automatically

  • check the permissions of the existing files and directories on your stick

ls * -la

  • if root or other ownership is already present, reset it with the following command

sudo chown ulefr01:ulefr01 /media/ulefr01/stick32GB -R

Make a directory for each stick

  • cd /media/ulefr01

  • mkdir mmcblk16G stick32GB stick16gb


adapt /etc/fstab

  • add a line for each stick

    • examples (see the mount sketch after these lines)

LABEL=mmcblk16G /media/ulefr01/mmcblk16G ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick32GB /media/ulefr01/stick32GB ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick16gb /media/ulefr01/stick16gb vfat user,defaults,noauto 0 0
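
with these lines in place, a stick can be mounted and unmounted by its mount point alone, without sudo; that is what the user option is for:

mount /media/ulefr01/stick32GB

umount /media/ulefr01/stick32GB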


Check the following

  • the following must now be possible:

    • mount and umount without being root

    • note: you cannot umount if the mount was done by root! If that is the case, first umount as root; then mount as your own user, after which you can umount as well.

    • put a new file on your stick without being root

    • make a new directory on your stick without being root

  • check whether you can create new files without being root

        • touch test

        • ls test -la

        • rm test


Set acl list [linux blogs franz ulenaers]

setfacl

note: usually possible on linux filesystems: btrfs, ext2, ext3, ext4 and Reiserfs!

  • How to set an acl for one user?

setfacl -m u:ulefr01:rwx /home/ulefr01

note: use your own username here instead of ulefr01

  • How to remove an acl?

setfacl -x u:ulefr01 /home/ulefr01
  • How to set an acl for two or more users?

setfacl -m u:ulefr01:rwx /home/ulefr01

setfacl -m u:myriam:r-x /home/ulefr01

note: use your second username instead of myriam; here myriam has no w write access, but does have r read and x exec!

  • How to request a list of the acls that have been set?

getfacl home/ulefr01
getfacl: Removing leading '/' from absolute path names
# file: home/ulefr01
# owner: ulefr01
# group: ulefr01
user::rwx
user:ulefr01:rwx
user:myriam:r-x 
group::---
mask::rwx
other::--- 
  • How to check the result?

getfacl home/ulefr01
 see above
ls /home/ulefr01 -dla
drwxrwx---+  ulefr01 ulefr01 4096 okt 1 18:40  /home/ulefr01

note the + sign!
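
A related option not shown above: on ext2,3,4 a directory can also be given a default acl (the -d flag), so that files created in it afterwards inherit the entries automatically. A small sketch with the same usernames:

setfacl -d -m u:myriam:r-x /home/ulefr01

getfacl /home/ulefr01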


The best (most performant) filesystem on a USB stick: how to set it up? [linux blogs franz ulenaers]

how to set up the best filesystem on a USB stick?

the best (most performant) filesystem is ext4

  • how to set it up?

mkfs.ext4 $device
  • first switch the journal off

tune2fs -O ^has_journal $device
  • do journaling only with data_writeback

tune2fs -o journal_data_writeback $device
  • do not use reserved space, set it to zero

tune2fs -m 0 $device


  • the attached bash script can be used for the three actions above:



file USBperf

#!/bin/bash
# USBperfext4

echo 'USBperf'
echo '--------'
echo 'ext4 device ?'
read device
echo "device= $device"
echo 'ok ?'
read ok
# quoted comparisons so an empty reply does not break the test
if [ "$ok" = '' ] || [ "$ok" = 'n' ] || [ "$ok" = 'N' ]
then
   echo 'not ok - stopping'
   exit 1
fi
echo "no journaling: tune2fs -O ^has_journal $device"
tune2fs -O ^has_journal $device
echo "use writeback data mode: tune2fs -o journal_data_writeback $device"
tune2fs -o journal_data_writeback $device
echo "disable reserved space: tune2fs -m 0 $device"
tune2fs -m 0 $device
echo 'done !'
read ok
echo "device= $device"
exit 0


  • adapt the /etc/fstab file for your USB

    • use the option 'noatime'

Encryption [linux blogs franz ulenaers]

With encryption you can secure the data on your computer by making it unreadable to the outside world!

How can you encrypt a filesystem?

install the following open source packages :

    loop-aes-utils and cryptsetup

            apt-get install loop-aes-utils

            apt-get install cryptsetup

        modprobe cryptoloop
        add the following modules to your /etc/modules :
            aes
            dm_mod
            dm_crypt
            cryptoloop

How to create a secured filesystem?

  1. dd if=/dev/zero of=/home/cryptfile bs=1M count=650
     this creates a file of 650 M
  2. losetup -e aes /dev/loop0 /home/cryptfile
     you are then asked for a password of at least 20 characters
  3. mkfs.ext3 /dev/loop0
     makes an ext3 filesystem with journaling
  4. mkdir /mnt/crypt
     makes an empty directory
  5. mount /dev/loop0 /mnt/crypt -t ext3
     now you have a filesystem available under /mnt/crypt

You can make your filesystem available automatically with the following entry in your /etc/fstab :

/home/cryptfile /mnt/crypt ext3 auto,encryption=aes,user,exec 0 0
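
Because that entry contains the user option, an ordinary user should afterwards be able to mount and unmount it by mount point alone:

mount /mnt/crypt

umount /mnt/crypt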

You can switch the encryption off with :

umount /mnt/crypt

losetup -d /dev/loop0        (this is no longer needed if you have the following entry in your /etc/fstab :
                /home/cryptfile /mnt/crypt ext3 auto,encryption=aes,exec 0 0)
Manual mounting can be done with :
  • losetup -e aes /dev/loop0 /home/cryptfile
    you are asked to fill in a password of at least 20 characters
    if the password is wrong you get the following message :
        mount: wrong fs type, bad option, bad superblock on /dev/loop0,
        or too many mounted file systems
  • mount /dev/loop0 /mnt/crypt -t ext3
    with this you can mount the filesystem


14-03-2024

19:45

App Launchers for Ubuntu 19.04 [Tech Drive-in]

During the transition period, when GNOME Shell and Unity were pretty rough around the edges and slow to respond, 3rd party app launchers were a big deal. Over time the newer desktop environments improved and became fast, reliable and predictable, reducing the need for alternate app launchers.


As a result, many third-party app launchers have either slowed down development or simply ceased to exist. Ulauncher seems to be the only one to have bucked the trend so far. Synapse and Kupfer on the other hand, though old and not as actively developed anymore, still pack a punch. Since Kupfer is too old school, we'll only be discussing Synapse and Ulauncher here.

Synapse

I still remember the excitement when I first reviewed Synapse more than 8 years ago. Back then, Synapse was something very unique to Linux and Ubuntu, and it still is in many ways. Though Synapse is not the active project it used to be, the launcher still works great, even on the brand new Ubuntu 19.04.

synapse ubuntu 19.04
 
No need to meddle with PPAs and DEBs, Synapse is available in Ubuntu Software Center.

ulauncher ubuntu 19.04 disco
 
CLICK HERE to directly find and install Synapse from Ubuntu Software Center, or simply search 'Synapse' in USC. Launch the app afterwards. Once launched, you can trigger Synapse with Ctrl+Space keyboard shortcut.

Ulauncher

The new kid on the block, apparently. But new doesn't mean it is lacking in any way. What makes Ulauncher quite unique is its extensions. And there is plenty to choose from.

ulauncher ubuntu 19.04

From an extension that lets you control your Spotify desktop app, to generic unit converters or simply timers, Ulauncher extensions have got you covered.

Let's install the app first. Download the DEB file for Debian/Ubuntu and double-click the downloaded file to install it. To complete the installation via Terminal instead, do this:

sudo dpkg -i ~/Downloads/ulauncher_4.3.2.r8_all.deb

Change the filename/location if they are different in your case. And if the command reports dependency errors, force the install using the command below.

sudo apt-get install -f

Done. Post install, launch the app from your app-list and you're good to go. Once started, Ulauncher will sit in your system tray by default. And just like Synapse, Ctrl+Space will trigger Ulauncher.


Installing extensions in Ulauncher is pretty straightforward too.


Find the extensions you want on the Ulauncher Extensions page. Trigger a Ulauncher instance with Ctrl+Space and go to Settings > Extensions > Add extension. Provide the URL from the extension page and let the app do the rest.

A Standalone Video Player for Netflix, YouTube, Twitch on Ubuntu 19.04 [Tech Drive-in]

Snap apps are a godsend. ElectronPlayer is an Electron based app available on Snapstore that doubles up as a standalone media player for video streaming services such as Netflix, YouTube, Twitch, Floatplane etc.

And it works great on Ubuntu 19.04 "disco dingo". From what we've tested, Netflix works like a charm, and so does YouTube. ElectronPlayer also has a picture-in-picture mode that lets it run above desktop and full screen applications.

netflix player ubuntu 19.04

For me, this is great because I can free up tabs on my Firefox window, which is almost never clutter-free.

Use the command below to install ElectronPlayer directly from Snapstore. Open Terminal (Ctrl+Alt+t) and copy:

sudo snap install electronplayer

Press ENTER and enter your password when asked.

After the process is complete, search for ElectronPlayer in your App list. Sign in to your favorite video streaming services and you are good to go. Let us know your feedback in the comments.

Howto Upgrade to Ubuntu 19.04 from Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]

As most of you should know already, Ubuntu 19.04 "disco dingo" has been released. A lot of things have changed, see our comprehensive list of improvements in Ubuntu 19.04. Though it is not really necessary to make the jump, I'm sure many here would prefer to have the latest and greatest from Ubuntu. Here's how you upgrade to Ubuntu 19.04 from Ubuntu 18.10 and Ubuntu 18.04.

Upgrading to Ubuntu 19.04 from Ubuntu 18.04 LTS is tricky. There is no way you can make the jump from Ubuntu 18.04 LTS directly to Ubuntu 19.04. For that, you need to upgrade to Ubuntu 18.10 first. Pretty disappointing, I know. But when upgrading an entire OS, you can't be too careful.

And the process itself is not as tedious or time-consuming as it is on Windows. Also unlike Windows, the upgrades are not forced upon you while you're in the middle of something.

how to upgrade to ubuntu 19.04

If you wonder how the dock in the above screenshot rests at the bottom of the Ubuntu desktop, it's called the dash-to-dock GNOME Shell extension. That and more Ubuntu 19.04 tips and tricks here.

Upgrade to Ubuntu 19.04 from Ubuntu 18.10

Disclaimer: PLEASE back up your critical data before starting the upgrade process.

Let's start with the assumption that you're on Ubuntu 18.04 LTS.

After running the upgrade from Ubuntu 18.04 LTS to Ubuntu 18.10, the prompt will ask for a full system reboot. Please do that, and make sure everything is running smoothly afterwards. Now you have a clean new Ubuntu 18.10 up and running. Let's begin the Ubuntu 19.04 upgrade process.
  • Make sure your laptop is plugged in, this is going to take time. A stable Internet connection is a must too. 
  • Run your Software Updater app, and install all the updates available. 
how to upgrade to ubuntu 19.04 from ubuntu 18.10

  • Post the update, you should be prompted with an "Ubuntu 19.04 is available" window. It will guide you through the required steps without much hassle. 
  • If not, fire up Software & Updates app and check for updates. 
  • If both these didn't work in your case, there's always the command line option to force the upgrade (see also the configuration check after this list). Open the Terminal app (keyboard shortcut: CTRL+ALT+T), and run the command below.
sudo do-release-upgrade -d
  • Type the password when prompted. Don't let the simplicity of the command fool you, this is just the start of a long and complicated process. The do-release-upgrade command will check for available upgrades and then give you an estimated time and bandwidth required to complete the process. 
  • Read the instructions carefully and proceed. The process took only about an hour for me; it depends entirely on your internet speed and system resources.
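
One extra check if the upgrade prompt never appears: the prompt is controlled by /etc/update-manager/release-upgrades, and a non-LTS release like 19.04 is only offered when the Prompt line there is set to normal. To check:

cat /etc/update-manager/release-upgrades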
So, how did it go? Was the upgrade process as smooth as it should be? And what do you think about the new Ubuntu 19.04 "disco dingo"? Let us know in the comments.

15 Things I Did Post Ubuntu 19.04 Installation [Tech Drive-in]

Ubuntu 19.04, codenamed "Disco Dingo", has been released (and upgrading is easier than you think). I've been on Ubuntu 19.04 since its first Alpha, and this has been a rock solid release as far as I'm concerned. Changes in Ubuntu 19.04 are more evolutionary though, but the availability of the latest Linux Kernel version 5.0 is significant.

ubuntu 19.04 things to do after install

Unity is long gone and Ubuntu 19.04 is indistinguishably GNOME 3.x now, which is not necessarily a bad thing. Yes, I know, there are many who still swear by the simplicity of Unity desktop. But I'm an outlier here, I liked both Unity and GNOME 3.x even in their very early avatars. When I wrote this review of GNOME Shell desktop almost 8 years ago, I knew it was destined for greatness. Ubuntu 19.04 "Disco Dingo" runs GNOME 3.32.0.


We'll discuss more about GNOME 3.x and Ubuntu 19.04 in the official review. Let's get down to brass tacks: a step-by-step guide to the things I did after installing Ubuntu 19.04 "Disco Dingo".

1. Make sure your system is up-to-date

Do a full system update. Fire up your Software Updater and check for updates.

how to update ubuntu 19.04

OR
Via Terminal; this is my preferred way to update Ubuntu. Just one command.

sudo apt update && sudo apt dist-upgrade

Enter password when prompted and let the system do the rest.

2. Install GNOME Tweaks

GNOME Tweaks is non-negotiable.

things to do after installing ubuntu 19.04

GNOME Tweaks is an app that lets you tweak little things in GNOME based OSes that are otherwise hidden behind menus. If you are on Ubuntu 19.04, Tweaks is a must. Honestly, I don't remember if it was installed by default. But here's how you install it anyway; Apt-URL will prompt you if the app already exists.

Search for Gnome Tweaks in Ubuntu Software Center. OR simply CLICK HERE to go straight to the app in Software Center. OR even better, copy-paste this command in Terminal (keyboard shortcut: CTRL+ALT+T).

sudo apt install gnome-tweaks

3. Enable MP3/MP4/AVI Playback, Adobe Flash etc.

You do have the option to install most of the 'restricted-extras' while installing the OS itself now, but if you are not sure you've ticked all the right boxes, just run the following command in Terminal.

sudo apt install ubuntu-restricted-extras

OR

You can install it straight from the Ubuntu Software Center by CLICKING HERE.

4. Display Date/Battery Percentage on Top Panel  

The screenshot, I hope, is self-explanatory.

things to do after installing ubuntu 19.04

If you have GNOME Tweaks installed, this is easily done. Open GNOME Tweaks, go to the 'Top Bar' side menu and enable/disable what you need.

5. Enable 'Click to Minimize' on Ubuntu Dock

Honestly, I don't have a clue why this is disabled by default. You intuitively expect the app shortcuts on the Ubuntu dock to 'minimize' when you click on them (at least I do).

In fact, the feature is already there, all you need to do is switch it ON. Do this in Terminal.

gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'

That's it. Now if you didn't find the 'click to minimize' feature useful, you can always revert the Dock settings to their original state by copy-pasting the following command in the Terminal app.

gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
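
To see what the key is currently set to, and which values it accepts, gsettings can read that back too:

gsettings get org.gnome.shell.extensions.dash-to-dock click-action

gsettings range org.gnome.shell.extensions.dash-to-dock click-action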

6. Pin/Unpin Apps from Launcher

There are a bunch of apps that are pinned to your Ubuntu launcher by default.

things to do after ubuntu 19.04
 
For example, I almost never use the 'Help' app or the 'Amazon' shortcut preloaded on the launcher. But I would prefer a shortcut to the Terminal app instead. Right-click on your preferred app on the launcher, and add-to/remove-from favorites as you please.

7. Enable GNOME Shell Extensions Support

Extensions are an integral part of GNOME desktop.

It's a real shame that one has to go through all this for such a basic yet important feature. From the default Firefox browser, when you visit the GNOME Extensions page, you will notice a warning message on top describing the unavailability of Extensions support; the first part of the fix is installing the browser add-on that page offers.
Now, for the second part, you need to install the host connector on Ubuntu.
sudo apt install chrome-gnome-shell
  • Done. Don't mind the "chrome" in 'chrome-gnome-shell', it works with all major browsers, provided you have the correct browser add-on installed. 
  • You can now visit the GNOME Extensions page and install extensions as you wish with ease (if it doesn't work immediately, a system restart will clear things up). 
Extensions are such an integral part of the GNOME Desktop experience, I can't understand why this is not a system default in Ubuntu 19.04. Hopefully future releases of Ubuntu will have this figured out.

8. My Favourite 5 GNOME Shell Extensions for Ubuntu 19.04


9. Remove Trash Icon from Desktop

Annoyed by the permanent presence of the Home and Trash icons on the desktop? You are not alone. Luckily, there's an extension for that!
Done. Now access the extension's settings and enable/disable icons as you please. 


Extension settings can be accessed directly from the extension home page (notice the small wrench icon near the ON/OFF toggle). OR you can use the Extensions add-on like in the screenshot above.

10. Enable/Disable Two Finger Scrolling

As you must've noticed, two-finger scrolling has been a system default for some time now. 

things to do after installing ubuntu cosmic
 
One of my laptops acts strangely when two-finger scrolling is on. You can easily disable two-finger scrolling and enable old school edge-scrolling in 'Settings': Settings > Mouse and Touchpad

Quicktip: You can go straight to a submenu by simply searching for it in GNOME's universal search bar.

ubuntu 19.04 disco

Take for example the screenshot above, where I triggered the GNOME menu by hitting the Super (Windows) key, and simply searched for 'mouse' settings. The first result takes me directly to the 'Settings' submenu for 'Mouse and Touchpad' that we saw earlier. Easy right? More examples will follow.

11. Nightlight Mode ON

When you're glued to your laptop/PC screen for large amounts of time every day, it is advisable to enable the automatic nightlight mode for the sake of your eyes. Be it the laptop or my phone, this has become an essential feature. The sight of an LED display without nightlight ON during lowlight conditions immediately gives me a headache these days. Easily one of my favourite in-built features on GNOME.


Settings > Devices > Display > Night Light ON/OFF

things to do after installing ubuntu 19.04

OR as before, hit the superkey > search for 'night light'. It will take you straight to the submenu under Devices > Display. Guess you won't need any more examples on that.

things to do after installing ubuntu 19.04

12. Privacy on Ubuntu 19.04

Guess I don't need to lecture you on the importance of privacy in the post-PRISM era.

ubuntu 19.04 privacy

Ubuntu remembers your usage & history to recommend frequently used apps and such, and this is never shared over the network. But if you're not comfortable with this, you can always disable and delete your usage history on Ubuntu: Settings > Privacy > Usage & History

13. Perhaps a New Look & Feel?

As you might have noticed, I'm not using the default Ubuntu theme here.

themes ubuntu 19.04

Right now I'm using System76's Pop OS GTK theme and icon sets. They look pretty neat, I think. Just a few commands to install them on your Ubuntu 19.04.

sudo add-apt-repository ppa:system76/pop
sudo apt-get update 
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme 
sudo apt install pop-wallpapers 

Execute the last command if you want the Pop OS wallpapers as well. To enable the newly installed theme and icon sets, launch GNOME Tweaks > Appearance (see screenshot). I will be making separate posts on themes, icon sets and GNOME shell extensions. So stay subscribed. 

14. Disable Error Reporting

If you find the "application closed unexpectedly" popups annoying, and would like to disable error reporting altogether, this is what you need to do.


Settings > Privacy > Problem Reporting and switch it off. 

15. Liberate vertical space on Firefox by disabling Title Bar

This is not an Ubuntu specific tweak.


Firefox > Settings > Customize. Notice the "Title Bar" at the bottom left? Untick to disable.

Follow us on Facebook and Twitter.

Ubuntu 19.04 Gets Newer and Better Wallpapers [Tech Drive-in]

A "Disco Dingo" themed wallpaper was already there. But the latest update brings a bunch of new wallpapers as system defaults on Ubuntu 19.04.

ubuntu 19.04 wallpaper

Pretty, right? Here's the older one for comparison.

ubuntu 19.04 updates

The newer wallpaper is definitely cleaner and more professional looking, with better colors. I won't bother tinkering with wallpapers anymore, the new default on Ubuntu 19.04 is just perfect.

ubuntu 19.04 wallpapers

Too funky for my taste. But I'm sure there will be many who will prefer this darker, edgier wallpaper over the others. As we said earlier, the new "disco dingo" mascot calls for infinite wallpaper variations.


Apart from theme and artwork updates, Ubuntu 19.04 has the latest Linux Kernel version 5.0 (5.0.0.8 to be precise). You can read more about Ubuntu 19.04 features and updates here.

Ubuntu 19.04 hit beta a few days ago. Though it is a pretty stable release already for a beta, I'd recommend waiting for another 15 days or so until the final release. If all you care about is the wallpapers, you can download the new Ubuntu 19.04 wallpapers here. It's a DEB file, just double-click it after downloading.

LinuxBoot: A Linux Foundation Project to replace UEFI Components [Tech Drive-in]

UEFI has a pretty bad reputation among many in the Linux community. UEFI unnecessarily complicated Linux installation and distro-hopping on Windows pre-installed machines, for example. The LinuxBoot project by the Linux Foundation aims to replace some firmware functionality, like the UEFI DXE phase, with Linux components.

What is UEFI?
UEFI is a standard or specification that replaced the legacy BIOS firmware, which was the industry standard for decades. Essentially, UEFI defines the software components between the operating system and platform firmware.


UEFI boot has three phases: SEC, PEI and DXE. The Driver eXecution Environment, or DXE phase for short, is where the UEFI system loads drivers for configured devices. LinuxBoot replaces specific firmware functionality like the UEFI DXE phase with a Linux kernel and runtime.

LinuxBoot and the Future of System Startup
"Firmware has always had a simple purpose: to boot the OS. Achieving that has become much more difficult due to increasing complexity of both hardware and deployment. Firmware often must set up many components in the system, interface with more varieties of boot media, including high-speed storage and networking interfaces, and support advanced protocols and security features."  writes Linux Foundation.

linuxboot uefi replacement

LinuxBoot will replace this slow and often error-prone code with a Linux Kernel. This alone should significantly improve system startup performance.

On top of that, LinuxBoot intends to achieve increased boot reliability and boot-time performance by removing unnecessary code and by using reliable Linux drivers instead of lightly tested firmware drivers. LinuxBoot claims that these improvements could potentially help make the system startup process as much as 20 times faster.

In fact, this "Linux to boot Linux" technique has been fairly commonplace in supercomputers, consumer electronics, and military applications for decades. LinuxBoot looks to take this proven technique and improve on it so that it can be deployed and used more widely by individual users and companies.

Current Status
LinuxBoot is not as obscure or far-fetched as, say, lowRISC (an open-source, Linux-capable SoC) or even OpenPilot. At the FOSDEM 2019 summit, Facebook engineers revealed that their company is actively integrating and fine-tuning LinuxBoot to its needs, freeing hardware down to the lowest levels.


Facebook and Google are deeply involved in the LinuxBoot project. Being large data companies, where even small improvements in system startup speed and reliability can bring major advantages, their involvement is not a surprise. To put this in perspective, a large data center run by Google or Facebook can have tens of thousands of servers. Other companies involved include Horizon Computing, Two Sigma and 9elements Cyber Security.

Look up Uber Time, Price Estimates on Terminal with Uber CLI [Tech Drive-in]

The worldwide phenomenon that is Uber needs no introduction. Uber is an immensely popular ride sharing and ride hailing company that is valued in the billions. Uber is so disruptive and controversial that many cities and even countries are putting up barriers to protect the interests of local taxi drivers.

Enough about Uber as a company. To those among you who regularly use Uber app for booking a cab, Uber CLI could be a useful companion.


Uber CLI can be a great tool for the easily distracted. This unique command line application allows you to look up Uber cab time and price estimates without ever taking your eyes off the laptop screen.

Install Uber-CLI using NPM

You need to have npm first to install Uber-CLI on Ubuntu. npm, short for Node.js package manager, is a package manager for the JavaScript programming language. It is the default package manager for the JavaScript runtime environment Node.js. npm has a command line client and its own repository of packages.

This is how to install npm on Ubuntu 19.04, and Ubuntu 18.10. And thereafter, using npm, install Uber-CLI. Fire up the Terminal and run the following.

sudo apt update
sudo apt install nodejs npm
npm install uber-cli -g

And you're done. Uber CLI is a command line based application; here are a few examples of how it works in Terminal. Also, since Uber is not available where I live, I can't personally vouch for its accuracy everywhere.


Uber-CLI has just two use cases:

uber time 'pickup address here'
uber price -s 'start address' -e 'end address'

Easy right? I did some testing with places and addresses I'm familiar with, where Uber cabs are fairly common, and I found the results to be fairly accurate. Do test and leave feedback. See the Uber CLI github page for more info.

UBports Installer for Ubuntu Touch is just too good! [Tech Drive-in]

Even as someone who bought into the Ubuntu Touch hype very early, I was not expecting much from UBports, to be honest. But to my pleasant surprise, the UBports Installer turned my 4 year old BQ Aquaris E4.5 Ubuntu Edition hardware into a slick, clean, and usable phone again.



ubuntu phone 16.04
UBports Installer and Ubuntu Touch
As many of you know already, Ubuntu Touch was Canonical's failed attempt to deliver a competent mobile operating system based on its desktop version. The first smartphone with Ubuntu Touch installed was released in 2015 by BQ, a Spanish smartphone manufacturer. And in April 2016, the world's first Ubuntu Touch based tablet, the BQ Aquaris M10 Ubuntu Edition, was released.

Though the initial response was quite promising, Ubuntu Touch failed to make a significant enough splash in the smartphone space. In fact, Ubuntu Touch was not alone; many other mobile OS projects, like Firefox OS or even Samsung owned Tizen OS for that matter, failed to capture a sizable market share from the Android/iOS duopoly.

To the disappointment of Ubuntu enthusiasts, Mark Shuttleworth announced the termination of Ubuntu Touch development in April 2017.


Rise of UBports and revival of Ubuntu Touch Project
ubuntu touch 16.04

For all its inadequacies, Ubuntu Touch was one unique OS. It looked and felt different from most other mobile operating systems. And Ubuntu Touch enthusiasts were not ready to give up on it so easily. Enter UBports.

UBports turned Ubuntu Touch into a community-driven project. Passionate people from around the world now contribute to the development of Ubuntu Touch. In August 2018, UBports released its OTA-4, upgrading Ubuntu Touch's base from Canonical's original Ubuntu 15.04 (Vivid Vervet) to the current long-term support version, Ubuntu 16.04 LTS.

They actively test the OS on a number of legacy smartphones and help people install Ubuntu Touch using an incredibly capable, cross-platform installer.

Ubuntu Touch Installer on Ubuntu 19.04
Though I knew about the UBports project before, I was never motivated enough to try the new OS on my Aquaris E4.5, until yesterday. By a sheer stroke of luck, I stumbled upon the UBports Installer in the Ubuntu Software Center. I was curious to find out if it really worked as claimed on the page.

ubuntu touch installer on ubuntu 19.04

I fired up the app on my Ubuntu 19.04 and plugged in my Aquaris E4.5. Voila! The installer detected my phone in a jiffy. Since there wasn't much data on my BQ, I proceeded with the Ubuntu Touch installation.

ubports ubuntu touch installer

The instructions were pretty straightforward and it took probably 15 minutes to download, restart, and install the 16.04 LTS based Ubuntu Touch on my 4 year old hardware.

ubuntu touch ubports

In my experience, even flashing an Android phone was never this easy! My Ubuntu phone is usable again, without all the unnecessary bloat that made it clunky. This post is a tribute to the UBports community for the amazing work they've been doing with Ubuntu Touch. Here's also a list of smartphone hardware that can run Ubuntu Touch.

Retro Terminal that Emulates Old CRT Display (Ubuntu 18.10, 18.04 PPA) [Tech Drive-in]

We've featured cool-retro-term before. It is a wonderful little terminal emulator app for Ubuntu (and Linux) that sports the cool retro look of old CRT displays.

Let the pictures speak for themselves.

retro terminal ubuntu ppa

Pretty cool right? Not only does it look cool, it functions just like a normal Terminal app. You don't lose out on any features normally associated with a regular Terminal emulator. cool-retro-term comes with a bunch of themes and customisations that take its retro cool appeal a few notches higher.

cool-old-term retro terminal ubuntu linux

Enough now, let's find out how you install this retro looking Terminal emulator on Ubuntu 18.04 LTS, and Ubuntu 18.10. Fire up your Terminal app, and run these commands one after the other.

sudo add-apt-repository ppa:vantuz/cool-retro-term
sudo apt update
sudo apt install cool-retro-term

Done. The above PPA supports Ubuntu Artful, Bionic and Cosmic releases (Ubuntu 17.10, 18.04 LTS, 18.10). cool-retro-term is now installed and ready to go.


Since I don't have Artful or Bionic installations on any of my computers, I couldn't test the PPA on those releases. Do let me know if you faced any issues while installing the app.

And as some of you might have noticed, I'm running cool-retro-term from an AppImage. This is because I'm on Ubuntu 19.04 "disco dingo", and obviously the app doesn't support an unreleased OS (well, duh!).

retro terminal ubuntu ppa

This is how it looks in fullscreen mode. If you are a non-Ubuntu user, you can find various download options here. If you are on Fedora or distros based on it, cool-retro-term is available in the official repositories.

Google's Stadia Cloud Gaming Service, Powered by Linux [Tech Drive-in]

Unless you live under a rock, you must've been inundated with nonstop news about Google's high-octane launch ceremony yesterday where they unveiled the much hyped game streaming platform called Stadia.

Stadia, or Project Stream as it was earlier called, is a cloud gaming service where the games themselves are hosted on Google's servers, while the visual feedback from the game is streamed to the player's device through Google Chrome. If this technology catches on, and if it works as well as it did in the demos, Stadia could be what the future of gaming looks like.

Stadia, Powered by Linux

It is fairly common knowledge that Google data centers use Linux rather extensively. So it is not really surprising that Google would use Linux to power its cloud based Stadia gaming service.

google stadia runs on linux

Stadia's architecture is built on Google's data center network, which has an extensive presence across the planet. With Stadia, Google is offering a virtual platform where processing resources can be scaled up to match your gaming needs without the end user ever spending a dime more on hardware.


And since Google data centers mostly run on Linux, the games on Stadia will run on Linux too, through the Vulkan API. This is great news for gaming on Linux. Even if Stadia doesn't directly result in more games on Linux, it could potentially make gaming a platform agnostic, cloud based service, like Netflix.

With Stadia, "the data center is your platform," claims Majd Bakar, head of engineering at Stadia. Stadia is not constrained by limitations of traditional console systems, he adds. Stadia is a "truly flexible, scalable, and modern platform" that takes into account the future requirements of the gaming ecosystem. When launched later this year, Stadia will be able to stream at 4K HDR and 60fps with surround sound.


Watch the full presentation here. Tell us what you think about Stadia in the comments.

Ubuntu 19.04 Updates - 7 Things to Know [Tech Drive-in]

Ubuntu 19.04 has now been released. I'd been using it for a while before release, and even as a pre-beta the OS was pretty stable and not buggy at all. Here are a bunch of things you should know about Ubuntu 19.04.

what's new in ubuntu 19.04

1. Codename: "Disco Dingo"

How about that! As most of you know already, Canonical names its semiannual Ubuntu releases using an adjective and an animal with the same first letter (Intrepid Ibex, Feisty Fawn, and Maverick Meerkat, for example, were some of my favourites). And Ubuntu 19.04 is codenamed "Disco Dingo", which has to be one of the coolest codenames ever for an OS.


2. Ubuntu 19.04 Theme Updates

A new cleaner, crisper looking Ubuntu is coming your way. Can you notice the subtle changes to the default Ubuntu theme in the screenshot below? Like the new deep-black top panel and launcher? Very tastefully done.

what's new in ubuntu 19.04

To be sure, this is now looking more and more like vanilla GNOME and less like Unity, which is not a bad thing.

ubuntu 19.04 updates

There are changes to the icons too. That hideous blue Trash icon is gone. Others include a new Update Manager icon, Ubuntu Software Center icon and Settings Icon.

3. Ubuntu 19.04 Official Mascot

GIFs speak louder than words. Meet the official "Disco Dingo" mascot.



Pretty awesome, right? The "Disco Dingo" mascot calls for infinite wallpaper variations.

4. The New Default Wallpaper

The new "Disco Dingo" themed wallpaper is sweet: very Ubuntu-ish yet unique. A grayscale version of the same wallpaper is a system default too.

ubuntu 19.04 disco dingo features

UPDATE: There's an entire suite of newer and better wallpapers on Ubuntu 19.04!

5. Linux Kernel 5.0 Support

Ubuntu 19.04 "Disco Dingo" will officially support the recently released Linux Kernel version 5.0. Among other things, Linux Kernel 5.0 comes with AMD FreeSync display support, which is awesome news for users of high-end AMD Radeon graphics cards.

ubuntu 19.04 features

Also important to note is the added support for Adiantum Data Encryption and Raspberry Pi touchscreens. Apart from that, Kernel 5.0 has regular CPU performance improvements and improved hardware support.

6. Livepatch is ON

Ubuntu 19.04's 'Software and Updates' app has a new default tab called Livepatch. This new feature should ideally help you apply critical kernel patches without rebooting.

Livepatch may not mean much to a normal user who regularly powers down his or her computer, but it can be very useful for enterprise users, where any downtime is simply not acceptable.

ubuntu 19.04 updates

Canonical introduced this feature in Ubuntu 18.04 LTS, but it was later removed when Ubuntu 18.10 was released. The Livepatch feature is disabled on my Ubuntu 19.04 installation though, with a "Livepatch is not available for this system" warning. Not exactly sure what that means. Will update.

7. Ubuntu 19.04 Release Schedule

The beta freeze is scheduled to happen on March 28th and the final release on April 18th.

ubuntu 19.04 what's new

Normally, post the beta release, it is safe to install Ubuntu 19.04 for normal everyday use in my opinion, but ONLY if you are inclined to give it a spin before everyone else, of course. I'd never recommend a pre-release OS on production machines. Ubuntu 19.04 Daily Build Download.


My biggest disappointment though is the supposed Ubuntu Software Center revamp, which is now confirmed to not make it to this release. Follow us on Twitter and Facebook for more Ubuntu 19.04 release updates.

ubuntu 19.04 disco dingo

Recommended read: Top things to do after installing Ubuntu 19.04

Purism: A Linux OS is talking Convergence again [Tech Drive-in]

The hype around "convergence" just won't die, it seems. We have heard it a lot from Ubuntu, from KDE, and even from Google and Apple in fact. But the dream of true convergence, a uniform OS experience across platforms, never really materialised. Even behemoths like Apple and Google failed to pull it off with their Android/iOS duopoly. Purism's Debian based PureOS wants to change all that for good.

pure os linux

Purism, PureOS, and the future of Convergence

Purism, a computer technology company based out of California, shot to fame for its Librem series of privacy and security focused laptops and smartphones. Purism raised over half a million dollars through a Crowd Supply crowdfunding campaign for its laptop hardware back in 2015. And unlike many crowdfunding megahits which later turned out to be duds, Purism delivered on its promises big time.


Later in 2017, Purism surprised everyone again with a successful crowdfunding campaign for its Linux based opensource smartphone, dubbed Librem 5. The campaign raised over $2.6 million, surpassing its $1.5 million crowdfunding goal in just two weeks. Purism's Librem 5 smartphones will start shipping in late 2019.

Librem, which loosely refers to free and opensource software, was the brand name chosen by Purism for its laptops/smartphones. One of the biggest USPs of Purism devices is the hardware kill switches they come loaded with, which physically disconnect the phone's camera, WiFi, Bluetooth, and mobile broadband modem.

Meet PureOS, Purism's Debian Based Linux OS

PureOS is a free and opensource, Debian based Linux distribution which runs on all Librem hardware, including its smartphones. PureOS is endorsed by the Free Software Foundation.

purism os linux

The term convergence, in computer speak, refers to applications that can work seamlessly across platforms, bringing a consistent look and feel and similar functionality to your smartphone and your computer.
"Purism is beating the duopoly to that dream, with PureOS: we are now announcing that Purism’s PureOS is convergent, and has laid the foundation for all future applications to run on both the Librem 5 phone and Librem laptops, from the same PureOS release", announced Jeremiah Foster, the PureOS director at Purism (by duopoly, he was referring to Android/iOS platforms that dominate smartphone OS ecosystem).
Ideally, convergence should help app developers and users at the same time. App developers should be able to write their app once, test it once, and run it everywhere. And users should be able to seamlessly use, connect and sync apps across devices and platforms.

Easier said than done though. As Jeremiah Foster himself explains:
"it turns out that this is really hard to do unless you have complete control of software source code and access to hardware itself. Even then, there is a catch; you need to compile software for both the phone’s CPU and the laptop CPU which are usually different architectures. This is a complex process that often reveals assumptions made in software development but it shows that to build a truly convergent device you need to design for convergence from the beginning."

How PureOS is achieving convergence?

PureOS has had a distinct advantage when it comes to convergence. Purism is a hardware maker that also designs its platforms and software. From its inception, Purism has been working on a "universal operating system" that can run on different CPU architectures.

librem opensource phone

"By basing PureOS on a solid, foundational operating system – one that has been solving this performance and run-everywhere problem for years – means there is a large set of packaged software that 'just works' on many different types of CPUs."

The second big factor is "adaptive design": software apps that can adapt to desktop or mobile easily, just like a modern website with responsive design.


"Purism is hard at work on creating adaptive GNOME apps – and the community is joining this effort as well – apps that look great, and work great, both on a phone and on a laptop".

Purism has also developed an adaptive presentation library for GTK+ and GNOME, called libhandy, which third party app developers can use to contribute to Purism's convergence ecosystem. Still under active development, libhandy is already packaged into PureOS and Debian.

Komorebi Wallpapers display Live Time & Date, Stunning Parallax Effect on Ubuntu [Tech Drive-in]

Live wallpapers are not a new thing. In fact, we had a lot of live wallpapers to choose from on Linux 10 years ago. Today? Not so much. Be it GNOME or KDE, most desktops today are far less customizable than they used to be. The Komorebi wallpaper manager for Ubuntu is kind of a way back machine in that sense.

ubuntu live wallpaper

Install Gorgeous Live Wallpapers in Ubuntu 18.10/18.04 using Komorebi

Komorebi Wallpaper Manager comes with a pretty neat collection of live wallpapers and even video wallpapers. The package also contains a simple tool to create your own live wallpapers.


Komorebi comes packaged in a convenient 64-bit DEB package, making it super easy to install on Ubuntu and most Debian based distros (the latest version dropped 32-bit support though).
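
The install is the usual DEB routine; a minimal sketch, assuming the downloaded file is named komorebi.deb (the actual filename will vary by version):

sudo dpkg -i ~/Downloads/komorebi.deb
sudo apt-get install -f

The second command pulls in any dependencies dpkg could not resolve on its own.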
ubuntu 18.10 live wallpaper

That's it! Komorebi is installed and ready to go! Now launch Komorebi from the app launcher.


And finally, to uninstall Komorebi and revert all the changes you made, do this in Terminal (CTRL+ALT+T).

sudo apt remove komorebi

Komorebi works great on Ubuntu 18.10, and 18.04 LTS. A few more screenshots.

komorebi live wallpaper ubuntu

As you can see, live wallpapers obviously consume more resources than a regular wallpaper, especially when you switch on Komorebi's fancy video wallpapers. But it is definitely not the resource hog I feared it would be.

ubuntu wallpaper live time and date

Like what you see here? Go ahead and give Komorebi Wallpaper Manager a spin. Did it turn out to be less resource-friendly on your PC? Let us know your opinion in the comments.

ubuntu live wallpapers

A video wallpaper example. To see them in action, watch this demo.

Snap Install Mario Platformer on Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]

Nintendo's Mario needs no introduction. This game defined our childhoods. Now you can install and have fun with an unofficial version of the famed Mario platformer in Ubuntu 18.10 via this Snap package.


Play Nintendo's Mario Unofficially on Ubuntu 18.10

"Mari0 is a Mario + Portal platformer game." It is not an official release and hence the slight name change (Mari0 instead of Mario). Mari0 is still in testing, and might not work as intended. It doesn't work fullscreen for example, but everything else seems to be working great in my PC.

But please be aware that this app is still in testing, and a lot of things can go wrong. Mari0 also comes with joystick support. Here's how you install the unofficial Mari0 snap package. Do this in Terminal (CTRL+ALT+T):

sudo snap install mari0

To enable joystick support:

sudo snap connect mari0:joystick


Please find time to provide valuable feedback to the developer after testing, especially if something went wrong. You can also leave your feedback in the comments below.

Florida-based Startup Builds Ubuntu-Powered Aerial Robotics [Tech Drive-in]

Apellix is a Florida-based startup that specialises in aerial robotics. It intends to create safer work environments by replacing workers with task-specific drones that complete high-risk jobs at dangerous, elevated work sites.


Robotics with an Ubuntu Twist

Ubuntu is expanding its reach into robotics and IoT in a big way. A few years ago at the TechCrunch Disrupt event, UAVIA unveiled a new generation of its one hundred percent remotely operable drones (an industry first, they claimed), built with Ubuntu under the hood. Then there were others, like Erle Robotics (recently renamed Acutronic Robotics), which made big strides in drone technology using Ubuntu at its core.


Apellix is the only aerial robotics company with drones "capable of making contact with structures through fully computer-controlled flight", claims Robert Dahlstrom, Founder and CEO of Apellix.

"At height, a human pilot cannot accurately gauge distance. At 45m off the ground, they can’t tell if they are 8cm or 80cm away from the structure. With our solutions, an engineer simply positions the drone near the inspection site, then the on-board computer takes over and automates the delicate docking process." He adds.


Apellix considered many popular Linux distributions before zeroing in on Ubuntu for its stability, reliability, and large developer ecosystem. Ubuntu's versatility also enabled Apellix to use the same underlying OS platform and software packages across development and production.

The team is currently developing on Ubuntu Server with the intent to migrate to Ubuntu Core. The company is also making extensive use of Ubuntu Server, both on board its robotic systems and in its cloud operations, according to a case study by Canonical, the company behind Ubuntu.


"With our aircraft, an error of 2.5 cm could be the difference between a successful flight and a crash," comments Dahlstrom. "Software is core to avoiding those errors and allowing us to do what we do - so we knew that placing the right OS at the heart of our solutions was essential." 

Openpilot: An Opensource Alternative to Tesla Autopilot, GM Super Cruise [Tech Drive-in]

Openpilot is an opensource driving agent which at the moment can perform industry-standard functions such as Adaptive Cruise Control and Lane Keeping Assist System for a select few auto manufacturers.


opensource autopilot system

Meet Project Openpilot

Open source isn't a stranger to the world of autonomous cars. Even as far back as 2013, Ubuntu was spotted in Mercedes-Benz driverless cars, and it is also a well-known fact that Google is using a 'lightly customized Ubuntu' at the core of its push towards building fully autonomous cars.

Openpilot though is unique in its own way. It's an opensource driving agent that already works (as is claimed) in a number of models from manufacturers such as Toyota, Kia, Honda, Chevrolet, Hyundai, Jeep, etc.


Above image: an Openpilot user getting a 'distracted' alert. Apart from the Adaptive Cruise Control (ACC) and Lane Keeping Assist System functions, Openpilot's developers claim that their technology is currently "about on par with Tesla Autopilot and GM Super Cruise, and better than all other manufacturers."

If Tesla's Autopilot is iOS, Openpilot's developers would like their product to become the "Android for cars": the ubiquitous software of choice when autonomous systems in cars go universal.



The Openpilot-endorsed, officially supported list of cars keeps growing. It now includes some 40-odd models from manufacturers ranging from Toyota to Hyundai. And according to their Twitter feed, they are actively testing Openpilot on newer cars from VW, Subaru and others.

Even a lower variant of the Tesla Model S that shipped without the Tesla Autopilot system was upgraded with comma.ai's Openpilot solution, which then mimicked a number of Autopilot features, including automatic steering on highways, according to this article (comma.ai is the startup behind Openpilot).

Related read: Udacity's attempts to build a fully opensource self-driving car, and Linux Foundation's Automotive Grade Linux (AGL) infotainment system project which Toyota intends to use in its future cars.

Oranchelo - The icon theme to beat on Ubuntu 18.10 [Tech Drive-in]

OK, that might be an overstatement. But Oranchelo is good, really good.


Oranchelo Icons Theme for Ubuntu 18.10

Oranchelo is a flat-design icon theme originally designed for XFCE4 desktop. Though it works great on GNOME as well. I especially like the distinct take on Firefox and Chromium icons, as you can see in the screenshot.



Here's how you install the Oranchelo icon theme on Ubuntu 18.10 using the Oranchelo PPA. Just copy-paste the following three commands into Terminal (CTRL+ALT+T).

sudo add-apt-repository ppa:oranchelo/oranchelo-icon-theme
sudo apt update
sudo apt install oranchelo-icon-theme

Now run GNOME Tweaks, Appearance > Icons > Oranchelo.
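If you prefer the Terminal, the same switch can be made with gsettings; this assumes a standard GNOME desktop and does exactly what the Tweaks route above does:

gsettings set org.gnome.desktop.interface icon-theme 'Oranchelo'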


Meet the artist behind Oranchelo icons theme at his deviantart page. So, how do you like the new icons? Let us know your opinion in the comments below.


11 Things I did After Installing Ubuntu 18.10 Cosmic Cuttlefish [Tech Drive-in]

I have been using "Cosmic Cuttlefish" since its first beta, and it is perhaps one of the most visually pleasing Ubuntu releases ever. But more on that later. For now, let's discuss what can be done to improve the overall user experience by diving deep into the nitty-gritty of Canonical's brand new flagship OS.

1. Enable MP3/MP4/AVI Playback, Adobe Flash etc.

This has been perhaps the standard 'first thing to do' ever since the Ubuntu age dawned on us. You now have the option to install most of the 'restricted extras' while installing the OS itself, but if you are not sure you've ticked all the right boxes, just run the following command in Terminal.

sudo apt install ubuntu-restricted-extras

OR

You can install it straight from the Ubuntu Software Center.

2. Get GNOME Tweaks

GNOME Tweaks is non-negotiable.


GNOME Tweaks is an app that lets you tweak the little things in GNOME-based OSes that are otherwise hidden behind menus. If you are on Ubuntu 18.10, Tweaks is a must. Honestly, I don't remember if it was installed by default, but here's how you install it anyway; Apt-URL will prompt you if the app already exists.


Search for GNOME Tweaks in the Ubuntu Software Center, or, even better, copy-paste this command in Terminal (keyboard shortcut: CTRL+ALT+T).

sudo apt install gnome-tweaks

3. Displaying Date/Battery Percentage on Top Panel  

The screenshot, I hope, is self explanatory.

things to do after installing ubuntu 18.10

If you have GNOME Tweaks installed, this is easily done. Open GNOME Tweaks, go to the 'Top Bar' side menu, and enable/disable what you need.

4. Enable 'Click to Minimize' on Ubuntu Dock

Honestly, I don't have a clue why this is disabled by default. You intuitively expect the app shortcuts on the Ubuntu dock to minimize when you click on them (at least I do).

In fact, the feature is already there; all you need to do is switch it ON. Do this in Terminal.

gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'

That's it. If you don't find the 'click to minimize' feature useful, you can always revert the Dock settings to their original state by copy-pasting the following command in the Terminal app.

gsettings reset org.gnome.shell.extensions.dash-to-dock click-action

5. Pin/Unpin Useful Stuff from Launcher

There are a bunch of apps that are pinned to your Ubuntu launcher by default.

 
For example, I almost never use the 'Help' app or the 'Amazon' shortcut preloaded on launcher. But I would prefer a shortcut to Terminal app instead. Right-click on your preferred app on the launcher, and add-to/remove-from favorites as you please.

6. Enable/Disable Two Finger Scrolling

As you must've noticed, two-finger scrolling is a system default now. 

 
One of my laptops acts strangely when two-finger scrolling is on. You can easily disable two-finger scrolling and enable old-school edge scrolling in 'Settings': Settings > Mouse and Touchpad.

Quick tip: you can go straight to submenus by simply searching for them in GNOME's universal search bar.

ubuntu 18.10 cosmic

Take, for example, the screenshot above, where I triggered the GNOME menu by hitting the Super (Windows) key and simply searched for 'mouse'. The first result takes me directly to the 'Mouse and Touchpad' submenu of 'Settings' that we saw earlier. Easy, right? More examples will follow.

7. Nightlight Mode ON

When you're glued to your laptop or PC screen for long stretches every day, it is advisable to enable the automatic nightlight mode for the sake of your eyes. Be it on my laptop or my phone, this has become an essential feature for me. The sight of an LED display without nightlight ON in low-light conditions immediately gives me a headache these days. Easily one of my favourite built-in features on GNOME.


Settings > Devices > Display > Night Light ON/OFF


OR, as before, hit the Super key and search for 'night light'. It will take you straight to the submenu under Devices > Display. Guess you won't need any more examples on that.


8. Safe Eyes App for Ubuntu

A popup fills the entire screen and forces you to take your eyes off it.


Apart from enabling the nightlight mode, Safe Eyes is another app I strongly recommend to those who stare at their laptops for long periods of time. This nifty little app forces you to take your eyes off the computer screen and do some standard eye exercises at regular intervals (which you can change).


Installation is pretty straightforward. Just these three commands in your Terminal.

sudo add-apt-repository ppa:slgobinath/safeeyes
sudo apt update 
sudo apt install safeeyes 

9. Privacy on Ubuntu 18.10

Guess I don't need to lecture you on the importance of privacy in the post-PRISM era.


Ubuntu remembers your usage and history to recommend frequently used apps and such, and this is never shared over the network. But if you're not comfortable with it, you can always disable and delete your usage history on Ubuntu: Settings > Privacy > Usage & History

10. Perhaps a New Look & Feel?

As you might have noticed, I'm not using the default Ubuntu theme here.


Right now I'm using System76's Pop OS GTK theme and icon sets. They look pretty neat, I think. Just three commands to install them on your Ubuntu 18.10 (plus an optional fourth for the wallpapers).

sudo add-apt-repository ppa:system76/pop
sudo apt-get update 
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme 
sudo apt install pop-wallpapers 

Execute the last command only if you want the Pop OS wallpapers as well. To enable the newly installed theme and icon sets, launch GNOME Tweaks > Appearance (see screenshot). I will be making separate posts on themes, icon sets and GNOME shell extensions, so stay subscribed.
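If you'd rather skip GNOME Tweaks, the GTK and icon themes can also be switched from the Terminal with gsettings (the shell theme additionally needs the User Themes GNOME extension, so it is left out of this sketch):

gsettings set org.gnome.desktop.interface gtk-theme 'Pop'
gsettings set org.gnome.desktop.interface icon-theme 'Pop'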

11. Disable Error Reporting

If you find the "application closed unexpectedly" popups annoying, and would like to disable error reporting altogether, this is what you need to do.

sudo gedit /etc/default/apport

This will open up a text editor window which has only one entry: "enabled=1". Change the value to '0' (zero) and you have Apport error reporting completely disabled.
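If you are comfortable with sed, the same edit can be made in one shot; this simply rewrites enabled=1 to enabled=0 in the file we just opened:

sudo sed -i 's/enabled=1/enabled=0/' /etc/default/apport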



RIOT OS: A tiny Opensource OS for the 'Internet of Things' (IoT) [Tech Drive-in]

"RIOT powers the Internet of Things like Linux powers the Internet." RIOT is a small, free and opensource operating system for the memory constrained, low power wireless IoT devices.


RIOT OS: A tiny OS for embedded systems

Initially developed by Freie Universität Berlin (FU Berlin), the INRIA institute and HAW Hamburg, RIOT OS has evolved over the years into a very competent alternative to TinyOS, Contiki and the like. It supports application programming in languages such as C and C++, and provides full multithreading and real-time capabilities. RIOT can run on 8-bit, 16-bit and 32-bit processors, including ARM Cortex-M parts.


RIOT is opensource, has its source code published on GitHub, and is based on a microkernel architecture (the bare minimum software required to implement an operating system). RIOT OS vs competition:

(image: a chart comparing RIOT OS with other IoT operating systems)

More information on RIOT OS can be found here. RIOT summits are held annually in major European cities; if you are interested, keep an eye out for the next one. Thank you for reading.

IBM, the 6th biggest contributor to the Linux Kernel, acquires Red Hat for $34 Billion [Tech Drive-in]

The $34 billion all-cash deal to purchase opensource pioneer Red Hat is IBM's biggest ever acquisition by far. The deal will give IBM a major foothold in the fast-growing cloud computing market, and the combined entity could give stiff competition to Amazon's cloud computing platform, AWS. But what about Red Hat and its future?


Another Oracle - Sun Microsystems deal in the making?
The alarmists among us might be quick to compare the IBM - Red Hat deal with the decade-old deal between Oracle Corporation and Sun Microsystems, which was then a major player in the opensource software scene.

But fear not. Unlike Oracle (which killed off Sun's OpenSolaris OS almost immediately after acquisition and even started a patent war against Android using Sun's Java patents), IBM is already a major contributor to opensource software including the mighty Linux Kernel. In fact, IBM was the 6th biggest contributor to Linux kernel in 2017.

What's in it for IBM?
With the acquisition of Red Hat, IBM becomes the world's #1 hybrid cloud provider, "offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses", according to Ginni Rometty, IBM Chairman, President and CEO. She adds:

“Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next 80 percent is about unlocking real business value and driving growth. This is the next chapter of the cloud. It requires shifting business applications to hybrid cloud, extracting more data and optimizing every part of the business, from supply chains to sales.”

The Future of Red Hat
The Red Hat story is almost as old as Linux itself. Founded in 1993, Red Hat's growth was phenomenal. Over the next two decades, Red Hat established itself as the premier Linux company, and Red Hat OS was the enterprise Linux operating system of choice. It set the benchmark for others like Ubuntu, openSUSE and CentOS to follow. Red Hat is currently the second largest corporate contributor to the Linux kernel after Intel (Intel really stepped up its Linux kernel contributions post-2013).

Regular users might be more familiar with the Fedora Project, a more user-friendly operating system maintained by Red Hat that competes with mainstream, non-enterprise operating systems like Ubuntu, elementary OS, Linux Mint, or even Windows 10 for that matter. Will Red Hat be able to stay independent post-acquisition?

According to the official press release, "IBM will remain committed to Red Hat’s open governance, open source contributions, participation in the open source community and development model, and fostering its widespread developer ecosystem. In addition, IBM and Red Hat will remain committed to the continued freedom of open source, via such efforts as Patent Promise, GPL Cooperation Commitment, the Open Invention Network and the LOT Network." Well, that's a huge relief.

In fact, IBM and Red Hat have been partners for over 20 years, with IBM serving as an early supporter of Linux, collaborating with Red Hat to help develop and grow enterprise-grade Linux. And as IBM's CEO mentioned, the acquisition is more of an evolution of the long-standing partnership between the two companies.
"Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise,” said Jim Whitehurst, President and CEO, Red Hat. “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation."
Predicting the future can be tricky, and a lot of things can go wrong. But one thing is sure: the acquisition of Red Hat by IBM is nothing like the Oracle - Sun deal. Between them, IBM and Red Hat must have contributed more to the open source community than any other organization.

How to Upgrade from Ubuntu 18.04 LTS to 18.10 'Cosmic Cuttlefish' [Tech Drive-in]

One day left before the final release of Ubuntu 18.10 codenamed "Cosmic Cuttlefish". This is how you make the upgrade from Ubuntu 18.04 to 18.10.

Upgrade to Ubuntu 18.10 from 18.04

Ubuntu 18.10 has a brand new look!
As you can see from the screenshot, a lot has changed. Ubuntu 18.10 arrives with a major theme overhaul. After almost a decade, the default Ubuntu GTK theme ("Ambiance") is being replaced with a brand new one called "Yaru". The new theme is based heavily on GNOME's default "Adwaita" GTK theme. More on that later.

Upgrade from Ubuntu 18.04 LTS to 18.10
If you're on Ubuntu 18.04 LTS, upgrading to 18.10 "Cosmic" is a pretty straightforward affair. Since 18.04 is a long-term support (LTS) release (meaning the OS will get official updates for about 5 years), it may not prompt you with an upgrade option when 18.10 finally arrives.

So here's how it's done. Disclaimer: back up your critical data before going forward. And better yet, don't try this on mission-critical machines. You're on LTS anyway.
  • An up-to-date Ubuntu 18.04 LTS is the first step. Do the following in Terminal.
$ sudo apt update && sudo apt dist-upgrade
$ sudo apt autoremove
  • The first command will check for updates and then proceed with upgrading your Ubuntu 18.04 LTS with the latest updates. The "autoremove" command will clean up any and all dependencies that were installed with applications, and are no longer required.
  • Now the slightly tricky part. You need to edit the /etc/update-manager/release-upgrades file and change the Prompt=never entry to Prompt=normal  or else it will give a "no release found" error message. 
  • I used Vim to make the edit. But for the sake of simplicity, let's use gedit. 
$ sudo gedit /etc/update-manager/release-upgrades
  • Make the edit and save the changes. Now you are ready to go ahead with the upgrade. Make sure your laptop is plugged-in, this will take time. 
  • To be on the safer side, please make sure that there's at least 5GB of disk space left in your home partition (it will prompt you and exit if you don't have enough space required for the upgrade). 
$ sudo do-release-upgrade -d
  • That's it. Wait for a few hours and let it do its magic. 
My upgrade to Ubuntu 18.10 was uneventful. Nothing broke, and it all worked like a charm. After the upgrade is done, you're probably still stuck with your old theme. Fire up the "GNOME Tweaks" app (get it from the Software Center if you haven't already), and change the theme and the icons to "Yaru".

Meet 'Project Fusion': An Attempt to Integrate Tor into Firefox [Tech Drive-in]

A real private mode in Firefox? A Tor-integrated Firefox could be just that. The Tor Project is currently working with Mozilla to integrate Tor into Firefox.


Over the years, and more so since the Cambridge Analytica scandal, Mozilla has taken a progressively tougher stance on user privacy. Firefox's Facebook Container extension, for example, makes it much harder for Facebook to collect data from your browsing activities (yep, that's a thing: Facebook is tracking your every move on the web). The extension now covers Facebook Messenger and Instagram as well.

Firefox with Tor Integration

For starters, Tor is free software and an open network for anonymous communication over the web. "Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location."

And don't confuse this project with the Tor Browser, which is a web browser with Tor's elements built on top of Firefox. The Tor Browser in its current form has many limitations: since it is based on Firefox ESR, it takes a lot of time and effort to rebase the browser with new features from Firefox's stable builds every year or so.

Enter 'Project Fusion'

Now that Mozilla has officially taken over the work of integrating Tor into Firefox through Project Fusion, things could change for the better. With the intention of creating a 'super-private' mode in Firefox that supports First Party Isolation (which prevents cookies from tracking you across domains), Fingerprinting Resistance (which blocks user tracking through canvas elements), and the Tor proxy, 'Project Fusion' is aiming big. Put together, the goals of 'Project Fusion' can be condensed into four points.
  • Implement fingerprinting resistance, make it more user-friendly, and reduce web breakage.
  • Implement proxy bypass framework.
  • Figure out the best way to integrate Tor proxy into Firefox.
  • Real private browsing mode in Firefox, with First Party Isolation, Fingerprinting Resistance, and Tor proxy.
As good as it sounds, Project Fusion could still be years away, or may not happen at all, given the complexity of the work. According to a Tor Project developer at Mozilla:
"Our ultimate goal is a long way away because of the amount of work to do and the necessity to match the safety of Tor Browser in Firefox when providing a Tor mode. There's no guarantee this will happen, but I hope it will and we will keep working towards it."
If you want to help, Firefox bugs tagged 'fingerprinting' in the whiteboard are a good place to start. Further reading at the Tor 'Project Fusion' page.
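In the meantime, two of the protections mentioned above already ship in Firefox as hidden preferences, courtesy of the Tor Uplift work. If you want a taste of them today, flip these in about:config (expect some web breakage):

privacy.firstparty.isolate = true
privacy.resistFingerprinting = true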

City of Bern Awards Switzerland's Largest Open Source Contract for its Schools [Tech Drive-in]

In another major win within a span of weeks for the proponents of open source solutions in the EU, Bern, the capital of Switzerland, is pushing ahead with its plans to adopt open source tools as the software of choice for all its public schools. If all goes well, some 10,000 students in Swiss schools could soon start getting their training on an IT infrastructure that is largely open source.


Over 10,000 Students to Benefit

Switzerland's largest open-source deal introduces a brand new IT infrastructure for the public schools of its capital city. The package includes Collabora Cloud Office, an online version of LibreOffice to be hosted in the City of Bern's data center, as its core component. Nextcloud, Kolab, Moodle and Mahara are the other prominent open source tools included in the package. The contract is worth CHF 13.7 million over 6 years.

In an interview given to 'Der Bund', one of Switzerland's oldest news publications, open-source advocate Matthias Stürmer, an EPP city councillor and IT expert, said that this is probably the largest ever open-source deal in Switzerland.

Many European countries are moving to adopt open source solutions for their cities and schools. From the German Federal Information Technology Centre's (ITZBund) recent selection of Nextcloud as its cloud solutions partner, to the city of Turin's adoption of Ubuntu, to the Italian military's LibreOffice migration, Europe's recognition of open source solutions as a legitimate alternative is gaining ground.

Ironically enough, most of this software will run on the proprietary iOS platform, as the clients given to students will all be Apple iPads. But hey, it had to start somewhere. When Europe's richest countries adopt open source, others will surely take notice. Stay tuned for updates. [via inside-channels.ch]

Germany says No to Public Cloud, Chooses Nextcloud's Open Source Solution [Tech Drive-in]

Germany's Federal Information Technology Centre (ITZBund) opts for an on-premise cloud solution which, unlike those fancy public cloud solutions, is completely private and under its direct control.


Given the recent privacy mishaps at some of the biggest public cloud providers on the planet, it is only natural that government agencies across the world are opting for solutions that give users more privacy and security. If the recent Facebook - Cambridge Analytica debacle is any indication, data vulnerability has become a serious national security concern for all countries.

In light of these developments, the German government's IT service provider, ITZBund, has chosen Nextcloud as its cloud solutions partner. Nextcloud is a free and open source cloud solutions company based in Europe that lets you install and run its software on your own private server. ITZBund has been running a pilot with some 5,000 users on Nextcloud's platform since 2016.
"Nextcloud is pleased to announce that the German Federal Information Technology Center (ITZBund) has chosen Nextcloud as their solution for efficient and secure file sharing and collaboration in a public tender. Nextcloud is operated by the ITZBund, the central IT service provider of the federal government, and made available to around 300,000 users. ITZBund uses a Nextcloud Enterprise Subscription to gain access to operational, scaling and security expertise of Nextcloud GmbH as well as long-term support of the software."
ITZBund employs about 2,700 people, including IT specialists, engineers, and network and security professionals. After the successful completion of the pilot, ITZBund floated a public tender, which eventually selected Nextcloud as the preferred partner. Nextcloud scored high on security requirements and scalability, which it addresses through its unique apps concept.

LG Makes its webOS Operating System Open Source, Again! [Tech Drive-in]

Not many might remember HP's capable webOS. The open source webOS operating system was HP's answer to the Android and iOS platforms. It was slick and very user-friendly from the start; some even considered it a better alternative to Android for tablets at the time. But like many other smaller players, HP's webOS just couldn't find enough takers, and the project was abruptly ended and sold off to LG.


The Open Source LG webOS

Under the 2013 agreement with HP Inc., LG Electronics had unlimited access to all webOS-related documentation and source code. When LG took the project underground, webOS was still an open-source project.

After many years of development, webOS is now LG's platform of choice for its Smart TV division, and it is generally considered one of the better-sorted Smart TV user interfaces. LG is now ready to take the platform beyond Smart TVs: it has developed an open source version of the platform, called webOS Open Source Edition, now available to the public at webosose.org.

Dr. I.P. Park, CTO at LG Electronics, had this to say: "webOS has come a long way since then and is now a mature and stable platform ready to move beyond TVs to join the very exclusive group of operating systems that have been successfully commercialized at such a mass level. As we move from an app-based environment to a web-based one, we believe the true potential of webOS has yet to be seen."

By open sourcing webOS, it looks like LG is gunning for Samsung's Tizen OS, which is also open source and built on top of Linux. In our opinion, device manufacturers preferring open platforms (like Automotive Grade Linux), over Android or iOS is a welcome development for the long-term health of the industry in general.

08-02-2024

13-12-2023

10:29

New York State does its Christmas shopping at ASML [Computable]

The US state of New York is set to buy a billion dollars' worth of chipmaking machines from ASML. The investment is part of a ten-billion-dollar plan to build a nanotech complex near the University at Albany.

Sogeti to keep working on the KB's data warehouse [Computable]

Sogeti will once again be the data partner of the Koninklijke Bibliotheek (KB) for the next three years, with an option to extend to a maximum of six years. The IT company has managed the data warehouse since 2016 and now, as...

HPE tightens gen-AI ties with Nvidia [Computable]

Infrastructure specialist Hewlett Packard Enterprise (HPE) is going to work more closely with AI hardware and software supplier Nvidia. From January 2024 they will jointly offer a powerful enterprise computing solution for generative artificial intelligence (gen-AI).

Econocom announces international arm: Gather [Computable]

Franco-Belgian IT service provider Econocom has set up a separate, internationally operating business unit under the name Gather. This arm bundles the company's expertise in audiovisual solutions, unified communications and IT products and services, aimed at larger organisations...

Coalition: improve cyclist safety with sensors [Computable]

The newly founded Coalition for Cyclist Safety, with bicycle manufacturer Koninklijke Gazelle on board, is working to improve cyclist safety with the help of sensor technology, also known as vehicle-to-everything (V2X) technology. The car industry serves as a shining example;...

12-12-2023

13:39

Civil servants may experiment with gen-AI under conditions [Computable]

The cabinet will no longer manage to deliver a comprehensive vision on generative AI (gen-AI) this year. The Tweede Kamer can expect such an integral picture of the impact this technology has on our society...

Software vendor Topdesk receives growth funding [Computable]

Delft-based Topdesk is receiving a capital injection of two hundred million euros for growth and further development. CVC Capital Partners, which is taking a minority stake, will give the service management software vendor more clout.

Four million to boost datacenter education in the EU [Computable]

The European Commission (EC) has awarded a four-million-euro subsidy to the Colleges for European Datacenter Education (Cedce) project. Its goal is to offer high-quality education focused on datacenters. The project starts...

11-12-2023

22:26

Startup Nedscaper brings Fox-IT founder on board [Computable]

Menno van der Marel, co-founder of IT security firm Fox-IT, is becoming strategic director of Nedscaper. The Dutch/South African startup delivers security services for Microsoft environments. Van der Marel is also investing 2.2 million euros in the company.

PQR CEO Marijke Kasius moves on to Bechtle [Computable]

Bechtle is appointing Marijke Kasius as country director for the group's companies in the Netherlands as of 1 January. The 39-year-old Kasius currently heads IT service provider PQR together with Marco Lesmeister. That position will be taken over by Marc...

Former IBM and Ajax director Frank Kales has died [Computable]

Frank Kales passed away on 8 December at the age of 81. Football connoisseurs knew him as general director of football club Ajax during the turbulent 1999-2000 period. Before that he worked for decades at IBM, where he ultimately...

09-12-2023

23:56

EU AI Act drives up costs for software companies [Computable]

The arrival of the extensive, and at times far-reaching, artificial intelligence (AI) regulation that EU negotiators agreed on last night will not be without financial consequences for businesses. 'We have an AI deal. But an expensive one,' says...

18:10

Historic EU AI agreement reins in ChatGPT [Computable]

The EU AI Act will include rules for the 'foundation models' that underpin the enormous progress in AI. The European Commission reached agreement on this last night with the European...

Eset delivers DNS filtering to KPN customers [Computable]

IT security firm Eset is supplying domain name system (DNS) filtering to telecom company KPN. The service is said to give the home networks of KPN customers better protection against malware, phishing and unwanted content.

08-12-2023

18:02

Government bodies not yet working well with the Woo [Computable]

Government organisations often fail to apply the new Open Government Act (Wet open overheid, Woo) effectively, mainly due to limited capacity and a lack of priority. Civil servants, moreover, feel constrained in their freedom to give advice. This is evident...

West Brabant schools help SMEs via hackathon [Computable]

Students from the West Brabant educational institutions Avans, BUas and Curio are going to support entrepreneurs in their digital development. This Friday a hackathon takes place at the so-called Digiwerkplaats Mkb, where twenty Avans students, working in small groups, will build a sustainability dashboard for three...

CWI organises Cobol event to raise urgency [Computable]

The Centrum Wiskunde & Informatica (CWI) is organising an event on 18 January about the future of Cobol and mainframes. For this strategic Cobol day the centre is working together with Quuks and the Software Improvement Group (SIG). According to the organisation...

Plan for cloud restrictions splits EU [Computable]

A broad front is forming against the European Commission's plans for sovereignty requirements that would mainly benefit French cloud companies. The Netherlands has meanwhile secured the support of thirteen other EU member states in its opposition, including Germany...

Unilever again opts for SAP warehouse system [Computable]

With the production capacity of its factory in Nyirbator, Hungary doubling, Unilever had to take a new, larger local warehouse into use, complete with a new warehouse management system (WMS). The food giant's choice once again fell on...

Lyvia Group acquires Facility Kwadraat [Computable]

Sweden's Lyvia Group is making its first acquisition in the Netherlands: Facility Kwadraat. The Den Bosch-based company delivers software-as-a-service (SaaS) for facility management, long-term maintenance planning, rental administration and real estate management.

Adoption of generative AI is slow going [Computable]

Despite the great interest, a majority of large enterprises do not yet use generative AI (gen-AI) such as ChatGPT. The infrastructure in particular forms a barrier when implementing the large language models (LLMs) that...

ASM puts 300 million into American expansion [Computable]

ASM, the chip industry supplier until recently known as ASM International, will invest three hundred million dollars over the next five years in expanding its American operations. Its Arizona site is being expanded considerably.

07-12-2023

10:03

With Gemini, Google gets very close to OpenAI [Computable]

With the launch of Gemini, Google's largest and most ingenious artificial intelligence (AI) language model, the tech company is taking aim at the leading position of OpenAI's GPT-4. According to AI experts, the difference between the two large language models...

06-12-2023

21:15

Booking.com hack poses a challenge to the travel sector [Computable]

The recent hack targeting Booking.com says everything about the impact of cybercrime on the hotel and travel sector. In the scam, customer data was stolen and offered for sale on the dark web. In the process...

13:56

Van Oord maps climate risks [Computable]

Van Oord has developed an open source tool that is meant to provide insight into climate change and the risks that come with it. With the software, which combines multiple layers of data, the dredging and marine engineering company aims to map coastal areas and ecosystems worldwide...

30-08-2021

11:12

Django Authentication Video Tutorial [Simple is Better Than Complex]

Updated at Nov 8, 2018: New video added to the series: How to integrate Django forms with Bootstrap 4.

In this tutorial series, we are going to explore Django’s authentication system by implementing sign up, login, logout, password change, password reset and protected views from non-authenticated users. This tutorial is organized in 8 videos, one for each topic, ranging from 4 min to 15 min each.


Setup

Starting a Django project from scratch, creating a virtual environment and an initial Django app. After that, we are going to set up the templates and create an initial view to start working on the authentication.

If you are already familiar with Django, you can skip this video and jump to the Sign Up tutorial below.


Sign Up

The first thing we are going to do is implement a sign up view using the built-in UserCreationForm. In this video you are also going to get some insights into basic Django form processing.
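For reference, the sign up view built in the video boils down to something like the sketch below; the 'home' URL name and the signup.html template are assumptions here, and the complete code is in the repository linked at the end of this post.

from django.contrib.auth import login
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import redirect, render


def signup(request):
    if request.method == "POST":
        form = UserCreationForm(request.POST)
        if form.is_valid():
            user = form.save()  # creates the new user
            login(request, user)  # log them in right away
            return redirect("home")  # assumes a URL named "home"
    else:
        form = UserCreationForm()
    return render(request, "signup.html", {"form": form})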


Login

In this video tutorial we are going to first include the built-in Django auth URLs in our project and then proceed to implement the login view.
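Including the built-in auth URLs is a one-liner in the project urls.py; a minimal sketch (the accounts/ prefix is an assumption, any prefix works):

from django.urls import include, path

urlpatterns = [
    # wires up login, logout, password change and password reset views
    path("accounts/", include("django.contrib.auth.urls")),
]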


Logout

In this tutorial we are going to include the Django logout view and also start playing with conditional templates, displaying different content depending on whether the user is authenticated or not.


Password Change

The password change is a view where an authenticated user can change their password.


Password Reset

This tutorial is perhaps the most complicated one, because it involves several views and also sending emails. In this video tutorial you are going to learn how to use the default implementation of the password reset process and how to change the email messages.


Protecting Views

After implementing the whole authentication system, this video gives you an overview of how to protect some views from non-authenticated users by using the @login_required decorator and also class-based view mixins.
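Both flavours look roughly like this; the view names and the secret.html template are placeholders:

from django.contrib.auth.decorators import login_required
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import HttpResponse
from django.views.generic import TemplateView


@login_required
def secret(request):
    # anonymous users are redirected to the login page instead
    return HttpResponse("Only for authenticated users.")


class SecretView(LoginRequiredMixin, TemplateView):
    template_name = "secret.html"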


Bootstrap 4 Forms

Extra video showing how to integrate Django with Bootstrap 4 and how to use Django Crispy Forms to render Bootstrap forms properly. This video also includes some general advice and tips on using Bootstrap 4.


Conclusions

If you want to learn more about Django authentication and some extra stuff related to it, like how to use Bootstrap to make your auth forms look good, or how to write unit tests for your auth-related views, you can read the fourth part of my beginner's guide to Django: A Complete Beginner’s Guide to Django - Part 4 - Authentication.

Of course, the official documentation is the best source of information: Using the Django authentication system.

The code used in this tutorial: github.com/sibtc/django-auth-tutorial-example

This was my first time recording this kind of content, so your feedback is highly appreciated. Please let me know what you think!

And don’t forget to subscribe to my YouTube channel! I will post exclusive Django tutorials there. So stay tuned! :-)

09-07-2021

20:56

What You Should Know About The Django User Model [Simple is Better Than Complex]

The goal of this article is to discuss the caveats of the default Django user model implementation and also to give you some advice on how to address them. It is important to know the limitations of the current implementation so as to avoid the most common pitfalls.

Something to keep in mind is that the Django user model is heavily based on its initial implementation, which is at least 16 years old. Because users and authentication are a core part of the majority of web applications built with Django, most of its quirks have persisted through subsequent releases so as to maintain backward compatibility.

The good news is that Django offers many ways to override and customize its default implementation to fit your application's needs. But some of those changes must be made right at the beginning of the project; otherwise it will be too much of a hassle to change the database structure once your application is in production.

Below are the topics that we are going to cover in this article:


User Model Limitations

First, let’s explore the caveats and next we discuss the options.

The username field is case-sensitive

Even though the username field is marked as unique, by default it is case-sensitive. That means the usernames john.doe and John.doe identify two different users in your application.

This can be a security issue if your application has social aspects built around the username providing a public URL to a profile, like Twitter, Instagram or GitHub, for example.

It also delivers a poor user experience, because people don't expect john.doe to be a different username from John.Doe, and if the user doesn't type the username exactly the same way as when they created their account, they might be unable to log in to your application.

Possible Solutions:

  • If you are using PostgreSQL, you can replace the username CharField with the CICharField instead (which is case-insensitive)
  • You can override the method get_by_natural_key from the UserManager to query the database using iexact (see the sketch right after this list)
  • Create a custom authentication backend based on the ModelBackend implementation
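As a sketch of the manager override mentioned in the second option, something along these lines would work, assuming you attach it to a custom user model:

from django.contrib.auth.models import UserManager


class CaseInsensitiveUserManager(UserManager):
    def get_by_natural_key(self, username):
        # USERNAME_FIELD is "username" on the default model;
        # the __iexact lookup makes the query case-insensitive
        field = "{}__iexact".format(self.model.USERNAME_FIELD)
        return self.get(**{field: username})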

The username field validates against unicode letters

This is not necessarily an issue, but it is important for you to understand what that means and what its effects are.

By default the username field accepts letters, numbers and the characters: @, ., +, -, and _.

The catch here is on which letters it accepts.

For example, joão would be a valid username. Similarly, Джон or 約翰 would also be a valid username.

Django ships with two username validators: ASCIIUsernameValidator and UnicodeUsernameValidator. If the intended behavior is to only accept letters from A-Z, you may want to switch the username validator to use ASCII letters only by using the ASCIIUsernameValidator.

Possible Solutions:

  • Replace the default user model and change the username validator to ASCIIUsernameValidator
  • If you can’t replace the default user model, you can change the validator on the form you use to create/update the user

The email field is not unique

Multiple users can have the same email address associated with their account.

By default the email is used to recover a password. If there is more than one user with the same email address, the password reset will be initiated for all accounts and the user will receive an email for each active account.

It also may not be an issue but this will certainly make it impossible to offer the option to authenticate the user using the email address (like those sites that allow you to login with username or email address).

Possible Solutions:

  • Replace the default user model using the AbstractBaseUser to define the email field from scratch
  • If you can’t replace the user model, enforce the validation on the forms used to create/update

The email field is not mandatory

By default the email field does not allow null, but it does allow blank values, so in practice it lets users skip providing an email address.

Also, this may not be an issue for your application. But if you intend to allow users to log in with email it may be a good idea to enforce the registration of this field.

When using built-in resources like the user creation forms, or when using model forms, you need to pay attention to this detail if the desired behavior is to always have the user's email.

Possible Solutions:

  • Replace the default user model using the AbstractBaseUser to define the email field from scratch
  • If you can’t replace the user model, enforce the validation on the forms used to create/update

A user without password cannot initiate a password reset

There is a small catch in the user creation process: if the set_password method is called with None as its parameter, it will produce an unusable password. And that also means that the user will be unable to start a password reset to set their first password.

You can end up in that situation if you are using social networks like Facebook or Twitter to allow the user to create an account on your website.

Another way of ending up in this situation is simply by creating a user using the User.objects.create_user() or User.objects.create_superuser() without providing an initial password.

Possible Solutions:

  • If your user creation flow allows users to get started without setting a password, remember to pass a random (and lengthy) initial password, so the user can later go through the password reset flow and set a password of their own.
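A minimal sketch of that advice, using Python's secrets module to generate the throwaway password (the username and email are placeholders):

import secrets

from django.contrib.auth import get_user_model

User = get_user_model()

# the user never sees this password; they set a real one
# later through the password reset flow
User.objects.create_user(
    "john.doe",
    email="john.doe@example.com",
    password=secrets.token_urlsafe(32),
)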

Swapping the default user model is very difficult after you created the initial migrations

Changing the user model is something you want to do early on. After your database schema is generated and your database is populated it will be very tricky to swap the user model.

The reason is that you will likely have foreign keys referencing the user table, and Django's internal tables will create hard references to it as well. If you plan to change that later on, you will need to change and migrate the database by yourself.

Possible Solutions:

  • Whenever you are starting a new Django project, always swap the default user model, even if the default implementation fits all your needs. You can simply extend the AbstractUser and change a single configuration in the settings module (see the sketch below). This will give you tremendous freedom, and it will make things way easier in the future should the requirements change.
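For the record, the swap mentioned above is tiny. A sketch (the accounts app name is just the convention I like):

accounts/models.py

from django.contrib.auth.models import AbstractUser


class User(AbstractUser):
    # an empty subclass is enough to make the model swappable later
    pass

settings.py

AUTH_USER_MODEL = "accounts.User"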

Detailed Solutions

To address the limitations we discussed in this article we have two options: (1) implement workarounds to fix the behavior of the default user model; (2) replace the default user model altogether and fix the issues for good.

Which approach you should take depends on the stage your project is currently at.

  • If you have an existing project running in production that is using the default django.contrib.auth.models.User, go with the first solution implementing the workarounds;
  • If you are just starting your Django project, start on the right foot and go with solution number 2.

Workarounds

First let’s have a look on a few workarounds that you can implement if you project is already in production. Keep in mind that those solutions assume that you don’t have direct access to the User model, that is, you are currently using the default User model importing it from django.contrib.auth.models.

If you did replace the User model, then jump to the next section to get better tips on how to fix the issues.

Making username field case-insensitive

Before making any changes you need to make sure you don’t have conflicting usernames on your database. For example, if you have a User with the username maria and another with the username Maria you have to plan a data migration first. It is difficult to tell you what to do because it really depends on how you want to handle it. One option is to append some digits after the username, but that can disturb the user experience.

Now let’s say you checked your database and there are no conflicting usernames and you are good to go.

The first thing you need to do is protect your sign up forms so conflicting usernames can't be used to create accounts.

Then on your user creation form, used to sign up, you could validate the username like this:

def clean_username(self):
    username = self.cleaned_data.get("username")
    if User.objects.filter(username__iexact=username).exists():
        self.add_error("username", "A user with this username already exists.")
    return username

If you are handling user creation in a rest API using DRF, you can do something similar in your serializer:

def validate_username(self, value):
    if User.objects.filter(username__iexact=value).exists():
        raise serializers.ValidationError("A user with this username already exists.")
    return value

In the previous example, the ValidationError mentioned is the one defined in DRF.

The __iexact lookup on the queryset parameter queries the database ignoring case.

Now that the user creation is sanitized we can proceed to define a custom authentication backend.

Create a module named backends.py anywhere in your project and add the following snippet:

backends.py

from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend


class CaseInsensitiveModelBackend(ModelBackend):
    def authenticate(self, request, username=None, password=None, **kwargs):
        UserModel = get_user_model()
        if username is None:
            username = kwargs.get(UserModel.USERNAME_FIELD)
        try:
            case_insensitive_username_field = '{}__iexact'.format(UserModel.USERNAME_FIELD)
            user = UserModel._default_manager.get(**{case_insensitive_username_field: username})
        except UserModel.DoesNotExist:
            # Run the default password hasher once to reduce the timing
            # difference between an existing and a non-existing user (#20760).
            UserModel().set_password(password)
        else:
            if user.check_password(password) and self.user_can_authenticate(user):
                return user

Now switch the authentication backend in the settings.py module:

settings.py

AUTHENTICATION_BACKENDS = ('mysite.core.backends.CaseInsensitiveModelBackend', )

Please note that 'mysite.core.backends.CaseInsensitiveModelBackend' must be changed to the valid path, where you created the backends.py module.

It is important to have handled all conflicting users before changing the authentication backend, because otherwise the lookup could raise a MultipleObjectsReturned exception (and with it a 500 error).

Fixing the username validation to accept ASCII letters only

Here we can borrow the built-in UsernameField and customize it to append the ASCIIUsernameValidator to the list of validators:

from django.contrib.auth.forms import UsernameField
from django.contrib.auth.validators import ASCIIUsernameValidator

class ASCIIUsernameField(UsernameField):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.validators.append(ASCIIUsernameValidator())

Then on the Meta of your User creation form you can replace the form field class:

class UserCreationForm(forms.ModelForm):
    # field definitions...

    class Meta:
        model = User
        fields = ("username",)
        field_classes = {'username': ASCIIUsernameField}

Fixing the email uniqueness and making it mandatory

Here, all you can do is sanitize and handle the user input in every view where your user can modify their email address.

You have to include the email field on your sign up form/serializer as well.

Then just make it mandatory like this:

class UserCreationForm(forms.ModelForm):
    email = forms.EmailField(required=True)
    # other field definitions...

    class Meta:
        model = User
        fields = ("username",)
        field_classes = {'username': ASCIIUsernameField}

    def clean_email(self):
        email = self.cleaned_data.get("email")
        if User.objects.filter(email__iexact=email).exists():
            self.add_error("email", _("A user with this email already exists."))
        return email

You can also check a complete and detailed example of this form on the project shared together with this post: userworkarounds

Replacing the default User model

Now I’m going to show you how I usually like to extend and replace the default User model. It is a little bit verbose but that is the strategy that will allow you to access all the inner parts of the User model and make it better.

To replace the User model you have two options: extending the AbstractBaseUser or extending the AbstractUser.

To illustrate what that means, I drew the following diagram of how the default Django model is implemented:

User Model Diagram

The green circle identified with the label User is actually the one you import from django.contrib.auth.models and that is the implementation that we discussed in this article.

If you look at the source code, its implementation looks like this:

class User(AbstractUser):
    class Meta(AbstractUser.Meta):
        swappable = 'AUTH_USER_MODEL'

So basically it is just an implementation of the AbstractUser. Meaning all the fields and logic are implemented in the abstract class.

It is done that way so we can easily extend the User model by creating a subclass of the AbstractUser and adding the other features and fields you like.

But there is a limitation: you can't override an existing model field. For example, you can't redefine the email field to make it mandatory or to change its length.

So extending the AbstractUser class is only useful when you want to modify its methods, add more fields or swap the objects manager.

If you want to remove a field or change how the field is defined, you have to extend the user model from the AbstractBaseUser.

The best strategy to have full control over the user model is creating a new concrete class from the PermissionsMixin and the AbstractBaseUser.

Note that the PermissionsMixin is only necessary if you intend to use the Django admin or the built-in permissions framework. If you are not planning to use them, you can leave it out. And if things change in the future, you can add the mixin, migrate the model, and you are ready to go.

So the implementation strategy looks like this:

Custom User Model Diagram

Now I’m going to show you my go-to implementation. I always use PostgreSQL which, in my opinion, is the best database to use with Django. At least it is the one with most support and features anyway. So I’m going to show an approach that use the PostgreSQL’s CITextExtension. Then I will show some options if you are using other database engines.

For this implementation I always create an app named accounts:

django-admin startapp accounts

Then before adding any code I like to create an empty migration to install the PostgreSQL extensions that we are going to use:

python manage.py makemigrations accounts --empty --name="postgres_extensions"

Inside the migrations directory of the accounts app you will find an empty migration called 0001_postgres_extensions.py.

Modify the file to include the extension installation:

migrations/0001_postgres_extensions.py

from django.contrib.postgres.operations import CITextExtension
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
    ]

    operations = [
        CITextExtension()
    ]

Now let’s implement our model. Open the models.py file inside the accounts app.

I always grab the initial code directly from Django’s source on GitHub, copying the AbstractUser implementation, and modify it accordingly:

accounts/models.py

from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager
from django.contrib.auth.validators import ASCIIUsernameValidator
from django.contrib.postgres.fields import CICharField, CIEmailField
from django.core.mail import send_mail
from django.db import models
from django.utils import timezone
from django.utils.translation import gettext_lazy as _


class CustomUser(AbstractBaseUser, PermissionsMixin):
    username_validator = ASCIIUsernameValidator()

    username = CICharField(
        _("username"),
        max_length=150,
        unique=True,
        help_text=_("Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only."),
        validators=[username_validator],
        error_messages={
            "unique": _("A user with that username already exists."),
        },
    )
    first_name = models.CharField(_("first name"), max_length=150, blank=True)
    last_name = models.CharField(_("last name"), max_length=150, blank=True)
    email = CIEmailField(
        _("email address"),
        unique=True,
        error_messages={
            "unique": _("A user with that email address already exists."),
        },
    )
    is_staff = models.BooleanField(
        _("staff status"),
        default=False,
        help_text=_("Designates whether the user can log into this admin site."),
    )
    is_active = models.BooleanField(
        _("active"),
        default=True,
        help_text=_(
            "Designates whether this user should be treated as active. Unselect this instead of deleting accounts."
        ),
    )
    date_joined = models.DateTimeField(_("date joined"), default=timezone.now)

    objects = UserManager()

    EMAIL_FIELD = "email"
    USERNAME_FIELD = "username"
    REQUIRED_FIELDS = ["email"]

    class Meta:
        verbose_name = _("user")
        verbose_name_plural = _("users")

    def clean(self):
        super().clean()
        self.email = self.__class__.objects.normalize_email(self.email)

    def get_full_name(self):
        """
        Return the first_name plus the last_name, with a space in between.
        """
        full_name = "%s %s" % (self.first_name, self.last_name)
        return full_name.strip()

    def get_short_name(self):
        """Return the short name for the user."""
        return self.first_name

    def email_user(self, subject, message, from_email=None, **kwargs):
        """Send an email to this user."""
        send_mail(subject, message, from_email, [self.email], **kwargs)

Let’s review what we changed here:

  • We switched the username_validator to use ASCIIUsernameValidator
  • The username field now uses CICharField, which is case-insensitive
  • The email field is now mandatory, unique, and uses CIEmailField, which is case-insensitive

In the settings module, add the following configuration:

settings.py

AUTH_USER_MODEL = "accounts.CustomUser"

Now we are ready to create our migrations:

python manage.py makemigrations 

Apply the migrations:

python manage.py migrate

And you should get a similar result if you are just creating your project and there are no other models/apps:

Operations to perform:
  Apply all migrations: accounts, admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK

If you check your database schema you will see that there is no auth_user table (the default one); the users are now stored in the accounts_customuser table:

Database Scheme

And all the Foreign Keys to the user model will be created pointing to this table. That’s why it is important to do this right at the beginning of your project, before you create the database schema.

Now you have full freedom. You can replace first_name and last_name with a single field called name. You could remove the username field and identify your User model by the email (just make sure you change the USERNAME_FIELD property to email), as sketched below.
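For example, here is a minimal sketch of an email-only user model. The names are hypothetical, and note that the default UserManager assumes a username field exists, so an email-only model also needs a custom manager:

from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
from django.contrib.auth.models import PermissionsMixin
from django.db import models


class EmailUserManager(BaseUserManager):
    # Custom manager: the default UserManager expects a username argument.
    def create_user(self, email, password=None, **extra_fields):
        if not email:
            raise ValueError("The email address must be set")
        user = self.model(email=self.normalize_email(email), **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, email, password=None, **extra_fields):
        extra_fields.setdefault("is_staff", True)
        extra_fields.setdefault("is_superuser", True)
        return self.create_user(email, password, **extra_fields)


class EmailUser(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField("email address", unique=True)
    is_staff = models.BooleanField(default=False)
    is_active = models.BooleanField(default=True)

    objects = EmailUserManager()

    EMAIL_FIELD = "email"
    USERNAME_FIELD = "email"
    REQUIRED_FIELDS = []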

You can grab the source code on GitHub: customuser

Handling case-insensitivity without PostgreSQL

If you are not using PostgreSQL but want case-insensitive authentication, and you have direct access to the User model, a nice hack is to create a custom manager for it, like this:

accounts/models.py

from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager

class CustomUserManager(UserManager):
    def get_by_natural_key(self, username):
        case_insensitive_username_field = '{}__iexact'.format(self.model.USERNAME_FIELD)
        return self.get(**{case_insensitive_username_field: username})

class CustomUser(AbstractBaseUser, PermissionsMixin):
    # all the fields, etc...

    objects = CustomUserManager()

    # meta, methods, etc...

Then you could also sanitize the username field in the clean() method, always saving it as lowercase, so you don’t have to bother with case-variant/conflicting usernames:

def clean(self):
    super().clean()
    self.email = self.__class__.objects.normalize_email(self.email)
    self.username = self.username.lower()
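With this manager in place, authentication becomes case-insensitive. For example (illustrative values):

from django.contrib.auth import authenticate

# "John.Doe" and "john.doe" now resolve to the same account
user = authenticate(request, username="John.Doe", password="secret")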

Conclusions

In this tutorial we discussed a few caveats of the default User model implementation and presented a few options to address those issues.

The takeaway message here is: always replace the default User model.

If your project is already in production, don’t panic: there are ways to fix those issues following the recommendations in this post.

I also have two detailed blog posts, one on how to make the username field case-insensitive and another about how to extend the Django User model:

You can also explore the source code presented in this post on GitHub:

27-06-2021

09:33

How to Start a Production-Ready Django Project [Simple is Better Than Complex]

In this tutorial I’m going to show you how I usually start and organize a new Django project nowadays. I’ve tried many different configurations and ways to organize the project, but for the past 4 years or so this has been consistently my go-to setup.

Please note that this is not intended to be a “best practice” guide or to fit every use case. It’s just the way I like to use Django, and it’s also the way I’ve found allows a project to grow in a healthy way.

Premises

Usually those are the premises I take into account when setting up a project:

  • Separation of code and configuration
  • Multiple environments (production, staging, development, local)
  • Local/development environment first
  • Internationalization and localization
  • Testing and documentation
  • Static checks and styling rules
  • Not all apps must be pluggable
  • Debugging and logging

Environments/Modes

Usually I work with three environment dimensions in my code: local, tests and production. I like to see each as a “mode” in which I run the project, and what dictates the mode is which settings module I’m currently using.

Local

The local dimension always comes first. It is the settings and setup that a developer will use on their local machine.

All the defaults and configurations should serve the local development environment first.

The reason why I like to do it that way is that the project must be as simple as possible for a new hire to clone the repository, run the project and start coding.

The production environment will usually be configured and maintained by experienced developers who are more familiar with the code base itself. And because deployment should be automated, there is no reason for people to be re-creating the production server over and over again. So it is perfectly fine for the production setup to require a few extra steps and configuration.

Tests

The tests environment will also be available locally, so developers can test the code and run the static checks.

But the idea of the tests environment is to expose it to a CI environment like Travis CI, Circle CI, AWS Code Pipeline, etc.

It is a simple setup in which you can install the project and run all the unit tests.

Production

The production dimension is the real deal. This is the environment that goes live without the testing and debugging utilities.

I also use this “mode” or dimension to run the staging server.

A staging server is where you roll out new features and bug fixes before applying them to the production server.

The idea here is that your staging server should run in production mode, and the only differences are going to be your static/media server and database server. And this can be achieved just by changing the configuration, for example pointing the database connection string at a different server.

But the main thing is that you should not have any conditional in your code that checks if it is the production or staging server. The project should run exactly in the same way as in production.


Project Configuration

Right from the beginning it is a good idea to set up a remote version control service. My go-to option is Git on GitHub. Usually I create the remote repository first, then clone it on my local machine to get started.

Let’s say our project is called simple. After creating the repository on GitHub, I create a directory named simple on my local machine, then clone the repository inside it, as shown in the structure below:

simple/
└── simple/  (git repo)

Then I create the virtualenv outside of the Git repository:

simple/
├── simple/
└── venv/

Then alongside the simple and venv directories I may place some other support files related to the project which I do not plan to commit to the Git repository.

The reason I do that is that it is more convenient to destroy and re-create/re-clone either the virtual environment or the repository itself.

It is also good to store your virtual environment outside of the git repository/project root so you don’t need to bother ignoring its path when using libs like flake8, isort, black, tox, etc.

You can also use tools like virtualenvwrapper to manage your virtual environments, but I prefer doing it that way because everything is in one place. And if I no longer need to keep a given project on my local machine, I can delete it completely without leaving behind anything related to the project on my machine.

The next step is installing Django inside the virtualenv so we can use the django-admin commands.

source venv/bin/activate
pip install django

Inside the simple directory (where the git repository was cloned) start a new project:

django-admin startproject simple .

Pay attention to the . at the end of the command; it is necessary so we don’t create yet another directory called simple.

So now the structure should be something like this:

simple/                   <- (1) Wrapper directory with all project contents including the venv
├── simple/               <- (2) Project root and git repository
│   ├── .git/
│   ├── manage.py
│   └── simple/           <- (3) Project package, apps, templates, static, etc
│       ├── __init__.py
│       ├── asgi.py
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

At this point I already add three extra directories to the project package directory: templates, static and locale.

Both templates and static will be managed at the project level and the app level; the directories here refer to the global templates and static files.

The locale directory is necessary if you are using i18n to translate your application into other languages; it is where you are going to store the .mo and .po files.

So the structure now should be something like this:

simple/
├── simple/
│   ├── .git/
│   ├── manage.py
│   └── simple/
│       ├── locale/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

Requirements

Inside the project root (2) I like to create a directory called requirements with all the .txt files, breaking down the project dependencies like this:

  • base.txt: Main dependencies, strictly necessary to make the project run. Common to all environments
  • tests.txt: Inherits from base.txt + test utilities
  • local.txt: Inherits from tests.txt + development utilities
  • production.txt: Inherits from base.txt + production only dependencies

Note that I do not have a staging.txt requirements file; that’s because the staging environment uses the production.txt requirements, so we have an exact copy of the production environment.

simple/
├── simple/
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── tests.txt
│   └── simple/
│       ├── locale/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

Now let’s have a look inside each of those requirements files and at the Python libraries that I always use, no matter what type of Django project I’m developing.

base.txt

dj-database-url==0.5.0
Django==3.2.4
psycopg2-binary==2.9.1
python-decouple==3.4
pytz==2021.1

  • dj-database-url: A very handy Django library to create a one-line database connection string, which is convenient for storing in .env files in a safe way
  • Django: Django itself
  • psycopg2-binary: PostgreSQL is my go-to database when working with Django. So I always have it here for all my environments
  • python-decouple: A typed environment variable manager to help protect sensitive data that goes to your settings.py module. It also helps with decoupling configuration from source code
  • pytz: For timezone aware datetime fields

tests.txt

-r base.txt

black==21.6b0
coverage==5.5
factory-boy==3.2.0
flake8==3.9.2
isort==5.9.1
tox==3.23.1

The -r base.txt inherits all the requirements defined in the base.txt file

  • black: A Python auto-formatter so you don’t have to bother with styling and formatting your code. It lets you focus on what really matters while coding and doing code reviews
  • coverage: Lib to generate test coverage reports of your project
  • factory-boy: A model factory to help you set up complex test cases where the code you are testing relies on multiple models being set up in a certain way
  • flake8: Checks for code complexity, PEPs, formatting rules, etc
  • isort: Auto-formatter for your imports so all imports are organized by blocks (standard library, Django, third-party, first-party, etc)
  • tox: I use tox as an interface for CI tools to run all code checks and unit tests

local.txt

-r tests.txt

django-debug-toolbar==3.2.1
ipython==7.25.0

The -r tests.txt line inherits all the requirements defined in the base.txt and tests.txt files

  • django-debug-toolbar: 99% of the time I use it to debug the query count on complex views so you can optimize your database access
  • ipython: Improved Python shell. I use it all the time during the development phase to start some implementation or to inspect code

production.txt

-r base.txt

gunicorn==20.1.0
sentry-sdk==1.1.0

The -r base.txt inherits all the requirements defined in the base.txt file

  • gunicorn: A Python WSGI HTTP server for production used behind a proxy server like Nginx
  • sentry-sdk: Error reporting/logging tool to catch exceptions raised in production

Settings

Also following the environments and modes premise, I like to set up multiple settings modules. These serve as the entry points that determine the mode in which I’m running the project.

Inside the simple project package, I create a new directory called settings and break down the files like this:

simple/                       (1)
├── simple/                   (2)
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── tests.txt
│   └── simple/              (3)
│       ├── locale/
│       ├── settings/
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── local.py
│       │   ├── production.py
│       │   └── tests.py
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

Note that I removed the settings.py that used to live inside the simple/ (3) directory.

The majority of the code will live inside the base.py settings module.

Everything that can be set once in base.py, with its value switched via python-decouple, should stay in base.py and never be repeated/overridden in the other settings modules.

After the removal of the main settings.py, a nice touch is to modify the manage.py file to set local.py as the default settings module, so we can still run commands like python manage.py runserver without any further parameters:

manage.py

#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys


def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'simple.settings.local')  # <- here!
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()
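One detail worth noting (my own addition, not something the original layout requires): wsgi.py and asgi.py contain the same os.environ.setdefault call, and since they are the entry points used by the application server, it is reasonable to default them to the production settings:

wsgi.py

import os

from django.core.wsgi import get_wsgi_application

# Default to production; an explicit DJANGO_SETTINGS_MODULE env var still wins.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'simple.settings.production')

application = get_wsgi_application()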

Now let’s have a look at each of those settings modules.

base.py

from pathlib import Path

import dj_database_url
from decouple import Csv, config

BASE_DIR = Path(__file__).resolve().parent.parent


# ==============================================================================
# CORE SETTINGS
# ==============================================================================

SECRET_KEY = config("SECRET_KEY", default="django-insecure$simple.settings.local")

DEBUG = config("DEBUG", default=True, cast=bool)

ALLOWED_HOSTS = config("ALLOWED_HOSTS", default="127.0.0.1,localhost", cast=Csv())

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]

DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"

ROOT_URLCONF = "simple.urls"

INTERNAL_IPS = ["127.0.0.1"]

WSGI_APPLICATION = "simple.wsgi.application"


# ==============================================================================
# MIDDLEWARE SETTINGS
# ==============================================================================

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
]


# ==============================================================================
# TEMPLATES SETTINGS
# ==============================================================================

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [BASE_DIR / "templates"],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]


# ==============================================================================
# DATABASES SETTINGS
# ==============================================================================

DATABASES = {
    "default": dj_database_url.config(
        default=config("DATABASE_URL", default="postgres://simple:simple@localhost:5432/simple"),
        conn_max_age=600,
    )
}


# ==============================================================================
# AUTHENTICATION AND AUTHORIZATION SETTINGS
# ==============================================================================

AUTH_PASSWORD_VALIDATORS = [
    {
        "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
    },
]


# ==============================================================================
# I18N AND L10N SETTINGS
# ==============================================================================

LANGUAGE_CODE = config("LANGUAGE_CODE", default="en-us")

TIME_ZONE = config("TIME_ZONE", default="UTC")

USE_I18N = True

USE_L10N = True

USE_TZ = True

LOCALE_PATHS = [BASE_DIR / "locale"]


# ==============================================================================
# STATIC FILES SETTINGS
# ==============================================================================

STATIC_URL = "/static/"

STATIC_ROOT = BASE_DIR.parent.parent / "static"

STATICFILES_DIRS = [BASE_DIR / "static"]

STATICFILES_FINDERS = (
    "django.contrib.staticfiles.finders.FileSystemFinder",
    "django.contrib.staticfiles.finders.AppDirectoriesFinder",
)


# ==============================================================================
# MEDIA FILES SETTINGS
# ==============================================================================

MEDIA_URL = "/media/"

MEDIA_ROOT = BASE_DIR.parent.parent / "media"



# ==============================================================================
# THIRD-PARTY SETTINGS
# ==============================================================================


# ==============================================================================
# FIRST-PARTY SETTINGS
# ==============================================================================

SIMPLE_ENVIRONMENT = config("SIMPLE_ENVIRONMENT", default="local")

A few comments on the overall base settings file contents:

  • The config() calls come from the python-decouple library. They read the configuration from an environment variable (or a local .env file; see the sample after this list) and cast the value to the expected data type. Read more about python-decouple in this guide: How to Use Python Decouple
  • See how configurations like SECRET_KEY, DEBUG and ALLOWED_HOSTS default to local/development values. That means a new developer won’t need to set up a local .env or provide initial values to run the project locally
  • In the database settings block we are using dj_database_url to translate the one-line connection string into the Python dictionary Django expects
  • Note how for MEDIA_ROOT we navigate two directories up to create a media directory outside the git repository but inside our project workspace (the simple/ (1) directory). That keeps everything handy, and we won’t be committing test uploads to our repository
  • At the end of the base.py settings I reserve two blocks: one for third-party Django libraries that I may install, such as Django REST Framework or Django Crispy Forms, and one for first-party settings, i.e. custom settings created exclusively for this project, usually prefixed with the project name, like SIMPLE_XXX
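To make this concrete, here is what a hypothetical production .env file could look like (all values are placeholders):

.env

# production configuration (placeholder values)
SECRET_KEY=change-me-to-a-long-random-string
DEBUG=False
ALLOWED_HOSTS=.example.com
DATABASE_URL=postgres://simple:change-me@db.example.com:5432/simple
LANGUAGE_CODE=en-us
TIME_ZONE=UTC
SIMPLE_ENVIRONMENT=production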

local.py

# flake8: noqa

from .base import *

INSTALLED_APPS += ["debug_toolbar"]

MIDDLEWARE.insert(0, "debug_toolbar.middleware.DebugToolbarMiddleware")


# ==============================================================================
# EMAIL SETTINGS
# ==============================================================================

EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"

This is where I set up the Django Debug Toolbar, for example, or switch the email backend to print sent emails to the console instead of having to configure a valid email server to work on the project.

All the code that is only relevant for the development process goes here.

You can use it to set up other libs like Django Silk to run profiling without exposing it to production.

tests.py

# flake8: noqa

from .base import *

PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]


class DisableMigrations:
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return None


MIGRATION_MODULES = DisableMigrations()

Here I add configurations that help us run the test cases faster. Sometimes disabling the migrations may not work if you have interdependencies between your apps’ models, in which case Django may fail to create the test database without them.

In some projects it is better to keep the test database after the execution.
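If you do want to keep it, Django’s test runner supports this out of the box with the --keepdb flag, which reuses the test database between runs:

python manage.py test --settings=simple.settings.tests --keepdb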

production.py

# flake8: noqa

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

import simple
from .base import *

# ==============================================================================
# SECURITY SETTINGS
# ==============================================================================

CSRF_COOKIE_SECURE = True
CSRF_COOKIE_HTTPONLY = True

SECURE_HSTS_SECONDS = 60 * 60 * 24 * 7 * 52  # one year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_SSL_REDIRECT = True
SECURE_BROWSER_XSS_FILTER = True
SECURE_CONTENT_TYPE_NOSNIFF = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")

SESSION_COOKIE_SECURE = True


# ==============================================================================
# THIRD-PARTY APPS SETTINGS
# ==============================================================================

sentry_sdk.init(
    dsn=config("SENTRY_DSN", default=""),
    environment=SIMPLE_ENVIRONMENT,
    release="simple@%s" % simple.__version__,
    integrations=[DjangoIntegration()],
)

The most important part of the production settings is enabling all the security settings Django offers. I keep them here because you can’t run the development server with most of those configurations turned on.

The other thing is the Sentry configuration.

Note the simple.__version__ on the release. Next we are going to explore how I usually manage the version of the project.

Version

I like to reuse Django’s get_version utility for a simple and PEP 440 compliant version identification.

Inside the project’s __init__.py module:

simple/
├── simple/
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   └── simple/
│       ├── locale/
│       ├── settings/
│       ├── static/
│       ├── templates/
│       ├── __init__.py     <-- here!
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

You can do something like this:

from django import get_version

VERSION = (1, 0, 0, "final", 0)

__version__ = get_version(VERSION)
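For reference, this is roughly how the tuple maps to a version string, based on Django’s implementation (exact behavior may vary between Django versions):

from django import get_version

get_version((1, 0, 0, "final", 0))  # "1.0" (a zero micro version is dropped)
get_version((1, 0, 1, "final", 0))  # "1.0.1"
get_version((1, 0, 1, "rc", 2))     # "1.0.1rc2"
get_version((1, 1, 0, "alpha", 0))  # "1.1.dev<timestamp>" with a git checkout, else "1.1"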

The only downside of using get_version directly from the Django module is that it won’t be able to resolve the git hash for alpha versions.

A possible solution is to copy the django/utils/version.py file into your project and import it locally, so it will be able to identify your git repository within the project folder.

But it also depends on what kind of versioning you are using for your project. If the version is not really relevant to the end user and you only track it for internal management, such as identifying the release on a Sentry issue, you could use date-based release versioning.


Apps Configuration

A Django app is a Python package that you “install” using the INSTALLED_APPS in your settings file. An app can live pretty much anywhere: inside or outside the project package or even in a library that you installed using pip.

Indeed, your Django apps may be reusable in other projects. But that doesn’t mean they should be. Don’t let reusability dictate your project design, and don’t get obsessed over it. Also, an app shouldn’t necessarily represent a “part” of your website/web application.

It is perfectly fine for some apps to have no models, and for others to have only views. Some of your modules don’t even need to be a Django app at all. I like to see my Django project as a big Python package and organize it in a way that makes sense, not try to place everything inside reusable apps.

The general recommendation of the official Django documentation is to place your apps in the project root (alongside the manage.py file, identified here in this tutorial by the simple/ (2) folder).

But actually I prefer to create my apps inside the project package (identified in this tutorial by the simple/ (3) folder). I create a module named apps, and inside it I create my Django apps. The main reason is that it creates a nice namespace for the apps: it helps you easily identify that a particular import is part of your project. This namespace also helps when creating logging rules to handle events differently (more on that below).

Here is an example of how I do it:

simple/                      (1)
├── simple/                  (2)
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   └── simple/              (3)
│       ├── apps/            <-- here!
│       │   ├── __init__.py
│       │   ├── accounts/
│       │   └── core/
│       ├── locale/
│       ├── settings/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

In the example above the folders accounts/ and core/ are Django apps created with the command django-admin startapp.

Those two apps are always in my projects. The accounts app is the one I use to replace the default Django User model, and also the place where I eventually implement password reset, account activation, sign up, etc.

The core app I use for general/global implementations, for example a model that will be used across most of the other apps. I try to keep it decoupled from the other apps, never importing their resources. It usually is a good place to implement general-purpose or reusable views and mixins.

Something to pay attention to when using this approach is that you need to change the name in the app configuration, inside the apps.py file of the Django app:

accounts/apps.py

from django.apps import AppConfig

class AccountsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'accounts'  # <- this is the default name created by the startapp command

You should rename it like this, to respect the namespace:

from django.apps import AppConfig

class AccountsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'simple.apps.accounts'  # <- change to this!

Then in your INSTALLED_APPS you reference your apps like this:

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    
    "simple.apps.accounts",
    "simple.apps.core",
]

The namespace also helps to organize your INSTALLED_APPS, making your project apps easily recognizable. It also makes project-wide logging rules straightforward, as sketched below.
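As an example of the logging benefit mentioned earlier, a single logger entry can target every app in the project at once. A minimal sketch, assuming your modules get their loggers via logging.getLogger(__name__):

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # Matches simple.apps.accounts, simple.apps.core, and any future app
        "simple.apps": {"handlers": ["console"], "level": "INFO"},
    },
}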

App Structure

This is what my app structure looks like:

simple/                              (1)
├── simple/                          (2)
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   └── simple/                      (3)
│       ├── apps/
│       │   ├── accounts/            <- My app structure
│       │   │   ├── migrations/
│       │   │   │   └── __init__.py
│       │   │   ├── static/
│       │   │   │   └── accounts/
│       │   │   ├── templates/
│       │   │   │   └── accounts/
│       │   │   ├── tests/
│       │   │   │   ├── __init__.py
│       │   │   │   └── factories.py
│       │   │   ├── __init__.py
│       │   │   ├── admin.py
│       │   │   ├── apps.py
│       │   │   ├── constants.py
│       │   │   ├── models.py
│       │   │   └── views.py
│       │   ├── core/
│       │   └── __init__.py
│       ├── locale/
│       ├── settings/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

The first thing I do is create a folder named tests so I can break down my tests into several files. I always add a factories.py to create my model factories using the factory-boy library.

For both static and templates, always first create a directory with the same name as the app, to avoid name collisions when Django collects the static files and tries to resolve the templates.

The admin.py may or may not be there, depending on whether I’m using the Django Admin contrib app.

Other common modules that you may have are utils.py, forms.py, managers.py, services.py, etc.


Code style and formatting

Now I’m going to show you the configuration that I use for tools like isort, black, flake8, coverage and tox.

Editor Config

The .editorconfig file is a standard recognized by all major IDEs and code editors. It helps the editor understand the file formatting rules used in the project.

It tells the editor whether the project is indented with tabs or spaces, how many spaces/tabs to use, and what the max length for a line of code is.

I like to use Django’s .editorconfig file. Here is what it looks like:

.editorconfig

# https://editorconfig.org/

root = true

[*]
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true
end_of_line = lf
charset = utf-8

# Docstrings and comments use max_line_length = 79
[*.py]
max_line_length = 119

# Use 2 spaces for the HTML files
[*.html]
indent_size = 2

# The JSON files contain newlines inconsistently
[*.json]
indent_size = 2
insert_final_newline = ignore

[**/admin/js/vendor/**]
indent_style = ignore
indent_size = ignore

# Minified JavaScript files shouldn't be changed
[**.min.js]
indent_style = ignore
insert_final_newline = ignore

# Makefiles always use tabs for indentation
[Makefile]
indent_style = tab

# Batch files use tabs for indentation
[*.bat]
indent_style = tab

[docs/**.txt]
max_line_length = 79

[*.yml]
indent_size = 2

Flake8

Flake8 is a Python library that wraps PyFlakes, pycodestyle and Ned Batchelder’s McCabe script. It is a great toolkit for checking your code base against coding style (PEP 8), catching programming errors (like “library imported but unused” and “undefined name”) and checking cyclomatic complexity.

To learn more about flake8, check this tutorial I posted a while ago: How to Use Flake8.

setup.cfg

[flake8]
exclude = .git,.tox,*/migrations/*
max-line-length = 119

isort

isort is a Python utility/library to sort imports alphabetically and automatically separate them into sections.

To learn more about isort, check this tutorial I posted a while ago: How to Use Python isort Library.

setup.cfg

[isort]
force_grid_wrap = 0
use_parentheses = true
combine_as_imports = true
include_trailing_comma = true
line_length = 119
multi_line_output = 3
skip = migrations
default_section = THIRDPARTY
known_first_party = simple
known_django = django
sections = FUTURE,STDLIB,DJANGO,THIRDPARTY,FIRSTPARTY,LOCALFOLDER

Pay attention to known_first_party: it should be the name of your project, so isort can group your project’s imports.
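To illustrate, with the configuration above isort would arrange imports roughly like this (requests and TimeStampedModel are hypothetical examples):

import os  # standard library

from django.db import models  # Django

import requests  # third-party

from simple.apps.core.models import TimeStampedModel  # first-party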

Black

Black is a life-changing library to auto-format your Python applications. There is no way I’m coding with Python nowadays without using Black.

Here is the basic configuration that I use:

pyproject.toml

[tool.black]
line-length = 119
target-version = ['py38']
include = '\.pyi?$'
exclude = '''
  /(
      \.eggs
    | \.git
    | \.hg
    | \.mypy_cache
    | \.tox
    | \.venv
    | _build
    | buck-out
    | build
    | dist
    | migrations
  )/
'''
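The tox and coverage configuration is not shown above; a minimal tox.ini sketch that ties the checks together could look like this (my assumption, adjust the environments to your setup):

tox.ini

[tox]
skipsdist = true
envlist = tests,flake8,isort,black

[testenv]
deps = -r requirements/tests.txt

[testenv:tests]
commands = coverage run manage.py test --settings=simple.settings.tests

[testenv:flake8]
commands = flake8 .

[testenv:isort]
commands = isort --check-only --diff .

[testenv:black]
commands = black --check .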

Conclusions

In this tutorial I described my go-to project setup when working with Django. That’s pretty much how I start all my projects nowadays.

Here is the final project structure for reference:

simple/
├── simple/
│   ├── .git/
│   ├── .gitignore
│   ├── .editorconfig
│   ├── manage.py
│   ├── pyproject.toml
│   ├── requirements/
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── tests.txt
│   ├── setup.cfg
│   └── simple/
│       ├── __init__.py
│       ├── apps/
│       │   ├── accounts/
│       │   │   ├── migrations/
│       │   │   │   └── __init__.py
│       │   │   ├── static/
│       │   │   │   └── accounts/
│       │   │   ├── templates/
│       │   │   │   └── accounts/
│       │   │   ├── tests/
│       │   │   │   ├── __init__.py
│       │   │   │   └── factories.py
│       │   │   ├── __init__.py
│       │   │   ├── admin.py
│       │   │   ├── apps.py
│       │   │   ├── constants.py
│       │   │   ├── models.py
│       │   │   └── views.py
│       │   ├── core/
│       │   │   ├── migrations/
│       │   │   │   └── __init__.py
│       │   │   ├── static/
│       │   │   │   └── core/
│       │   │   ├── templates/
│       │   │   │   └── core/
│       │   │   ├── tests/
│       │   │   │   ├── __init__.py
│       │   │   │   └── factories.py
│       │   │   ├── __init__.py
│       │   │   ├── admin.py
│       │   │   ├── apps.py
│       │   │   ├── constants.py
│       │   │   ├── models.py
│       │   │   └── views.py
│       │   └── __init__.py
│       ├── locale/
│       ├── settings/
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── local.py
│       │   ├── production.py
│       │   └── tests.py
│       ├── static/
│       ├── templates/
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

You can also explore the code on GitHub: django-production-template.

04-03-2021

18:25

How to install Chrome OS on your (old) computer [Laatste Artikelen - Webwereld]

Google has been making great strides with Chrome OS for years, releasing Chrome devices running that operating system together with various computer manufacturers. But you don’t necessarily have to buy a dedicated device; you can also install the system on your (old) computer yourself, and we’ll show you how.

29-01-2021

12:47

How to Use Chart.js with Django [Simple is Better Than Complex]

Chart.js is a cool open source JavaScript library that helps you render HTML5 charts. It is responsive and offers 8 different chart types.

In this tutorial we are going to explore a little bit of how to make Django talk with Chart.js and render some simple charts based on data extracted from our models.

Installation

For this tutorial all you are going to do is add the Chart.js lib to your HTML page:

<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>

You can download it from Chart.js official website and use it locally, or you can use it from a CDN using the URL above.

Example Scenario

I’m going to use the same example I used for the tutorial How to Create Group By Queries With Django ORM, which is a good complement to this one, because the tricky part of working with charts is transforming the data so it fits a bar chart / line chart / etc.

We are going to use the two models below, Country and City:

class Country(models.Model):
    name = models.CharField(max_length=30)

class City(models.Model):
    name = models.CharField(max_length=30)
    country = models.ForeignKey(Country, on_delete=models.CASCADE)
    population = models.PositiveIntegerField()

And the raw data stored in the database:

cities

 id  name                country_id  population
  1  Tokyo               28          36,923,000
  2  Shanghai            13          34,000,000
  3  Jakarta             19          30,000,000
  4  Seoul               21          25,514,000
  5  Guangzhou           13          25,000,000
  6  Beijing             13          24,900,000
  7  Karachi             22          24,300,000
  8  Shenzhen            13          23,300,000
  9  Delhi               25          21,753,486
 10  Mexico City         24          21,339,781
 11  Lagos                9          21,000,000
 12  São Paulo            1          20,935,204
 13  Mumbai              25          20,748,395
 14  New York City       20          20,092,883
 15  Osaka               28          19,342,000
 16  Wuhan               13          19,000,000
 17  Chengdu             13          18,100,000
 18  Dhaka                4          17,151,925
 19  Chongqing           13          17,000,000
 20  Tianjin             13          15,400,000
 21  Kolkata             25          14,617,882
 22  Tehran              11          14,595,904
 23  Istanbul             2          14,377,018
 24  London              26          14,031,830
 25  Hangzhou            13          13,400,000
 26  Los Angeles         20          13,262,220
 27  Buenos Aires         8          13,074,000
 28  Xi'an               13          12,900,000
 29  Paris                6          12,405,426
 30  Changzhou           13          12,400,000
 31  Shantou             13          12,000,000
 32  Rio de Janeiro       1          11,973,505
 33  Manila              18          11,855,975
 34  Nanjing             13          11,700,000
 35  Rhine-Ruhr          16          11,470,000
 36  Jinan               13          11,000,000
 37  Bangalore           25          10,576,167
 38  Harbin              13          10,500,000
 39  Lima                 7           9,886,647
 40  Zhengzhou           13           9,700,000
 41  Qingdao             13           9,600,000
 42  Chicago             20           9,554,598
 43  Nagoya              28           9,107,000
 44  Chennai             25           8,917,749
 45  Bangkok             15           8,305,218
 46  Bogotá              27           7,878,783
 47  Hyderabad           25           7,749,334
 48  Shenyang            13           7,700,000
 49  Wenzhou             13           7,600,000
 50  Nanchang            13           7,400,000
 51  Hong Kong           13           7,298,600
 52  Taipei              29           7,045,488
 53  Dallas–Fort Worth   20           6,954,330
 54  Santiago            14           6,683,852
 55  Luanda              23           6,542,944
 56  Houston             20           6,490,180
 57  Madrid              17           6,378,297
 58  Ahmedabad           25           6,352,254
 59  Toronto              5           6,055,724
 60  Philadelphia        20           6,051,170
 61  Washington, D.C.    20           6,033,737
 62  Miami               20           5,929,819
 63  Belo Horizonte       1           5,767,414
 64  Atlanta             20           5,614,323
 65  Singapore           12           5,535,000
 66  Barcelona           17           5,445,616
 67  Munich              16           5,203,738
 68  Stuttgart           16           5,200,000
 69  Ankara               2           5,150,072
 70  Hamburg             16           5,100,000
 71  Pune                25           5,049,968
 72  Berlin              16           5,005,216
 73  Guadalajara         24           4,796,050
 74  Boston              20           4,732,161
 75  Sydney              10           5,000,500
 76  San Francisco       20           4,594,060
 77  Surat               25           4,585,367
 78  Phoenix             20           4,489,109
 79  Monterrey           24           4,477,614
 80  Inland Empire       20           4,441,890
 81  Rome                 3           4,321,244
 82  Detroit             20           4,296,611
 83  Milan                3           4,267,946
 84  Melbourne           10           4,650,000
countries

 id  name
  1  Brazil
  2  Turkey
  3  Italy
  4  Bangladesh
  5  Canada
  6  France
  7  Peru
  8  Argentina
  9  Nigeria
 10  Australia
 11  Iran
 12  Singapore
 13  China
 14  Chile
 15  Thailand
 16  Germany
 17  Spain
 18  Philippines
 19  Indonesia
 20  United States
 21  South Korea
 22  Pakistan
 23  Angola
 24  Mexico
 25  India
 26  United Kingdom
 27  Colombia
 28  Japan
 29  Taiwan
Example 1: Pie Chart

For the first example we are only going to retrieve the top 5 most populous cities and render it as a pie chart. In this strategy we are going to return the chart data as part of the view context and inject the results in the JavaScript code using the Django Template language.

views.py

from django.shortcuts import render
from mysite.core.models import City

def pie_chart(request):
    labels = []
    data = []

    queryset = City.objects.order_by('-population')[:5]
    for city in queryset:
        labels.append(city.name)
        data.append(city.population)

    return render(request, 'pie_chart.html', {
        'labels': labels,
        'data': data,
    })

In the view above we iterate through the City queryset, building a list of labels and a list of data. In this case the data is the population count stored in the City model.

For the urls.py just a simple routing:

urls.py

from django.urls import path
from mysite.core import views

urlpatterns = [
    path('pie-chart/', views.pie_chart, name='pie-chart'),
]

Now the template. I got a basic snippet from the Chart.js Pie Chart Documentation.

pie_chart.html

{% extends 'base.html' %}

{% block content %}
  <div id="container" style="width: 75%;">
    <canvas id="pie-chart"></canvas>
  </div>

  <script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
  <script>

    var config = {
      type: 'pie',
      data: {
        datasets: [{
          data: {{ data|safe }},
          backgroundColor: [
            '#696969', '#808080', '#A9A9A9', '#C0C0C0', '#D3D3D3'
          ],
          label: 'Population'
        }],
        labels: {{ labels|safe }}
      },
      options: {
        responsive: true
      }
    };

    window.onload = function() {
      var ctx = document.getElementById('pie-chart').getContext('2d');
      window.myPie = new Chart(ctx, config);
    };

  </script>

{% endblock %}

In the example above the base.html template is not important, but you can see it in the code example I shared at the end of this post.

This strategy is not ideal, but it works fine. The bad thing is that we are using the Django Template Language to interfere with the JavaScript logic. When we put {{ data|safe }} we are injecting a variable that came from the server directly into the JavaScript code.

The code above looks like this:

Pie Chart


Example 2: Bar Chart with Ajax

As the title says, we are now going to render a bar chart using an async call.

views.py

from django.shortcuts import render
from django.db.models import Sum
from django.http import JsonResponse
from mysite.core.models import City

def home(request):
    return render(request, 'home.html')

def population_chart(request):
    labels = []
    data = []

    queryset = City.objects.values('country__name').annotate(country_population=Sum('population')).order_by('-country_population')
    for entry in queryset:
        labels.append(entry['country__name'])
        data.append(entry['country_population'])
    
    return JsonResponse(data={
        'labels': labels,
        'data': data,
    })

Here we are using two views. The home view is the main page where the chart is loaded. The other view, population_chart, has the sole responsibility of aggregating the data and returning a JSON response with the labels and data.

If you are wondering what this queryset is doing: it groups the cities by country and aggregates each country’s total population. The result is a list of country + total population pairs (see the sample payload below). To learn more about this kind of query, have a look at this post: How to Create Group By Queries With Django ORM
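The JSON payload returned by population_chart looks roughly like this (values abbreviated; data holds the aggregated population totals in the same order as labels):

{
  "labels": ["China", "India", "United States", ...],
  "data": [...]
}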

urls.py

from django.urls import path
from mysite.core import views

urlpatterns = [
    path('', views.home, name='home'),
    path('population-chart/', views.population_chart, name='population-chart'),
]

home.html

{% extends 'base.html' %}

{% block content %}

  <div id="container" style="width: 75%;">
    <canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>
  </div>

  <script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
  <script>

    $(function () {

      var $populationChart = $("#population-chart");
      $.ajax({
        url: $populationChart.data("url"),
        success: function (data) {

          var ctx = $populationChart[0].getContext("2d");

          new Chart(ctx, {
            type: 'bar',
            data: {
              labels: data.labels,
              datasets: [{
                label: 'Population',
                backgroundColor: 'blue',
                data: data.data
              }]          
            },
            options: {
              responsive: true,
              legend: {
                position: 'top',
              },
              title: {
                display: true,
                text: 'Population Bar Chart'
              }
            }
          });

        }
      });

    });

  </script>

{% endblock %}

Now we have a better separation of concerns. Looking at the chart container:

<canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>

We added a reference to the URL that holds the chart rendering logic. Later on we are using it to execute the Ajax call.

var $populationChart = $("#population-chart");
$.ajax({
  url: $populationChart.data("url"),
  success: function (data) {
    // ...
  }
});

Inside the success callback we then finally execute the Chart.js related code using the JsonResponse data.

Bar Chart


Conclusions

I hope this tutorial helped you to get started with working with charts using Chart.js. I published another tutorial on the same subject a while ago but using the Highcharts library. The approach is pretty much the same: How to Integrate Highcharts.js with Django.

If you want to grab the code I used in this tutorial you can find it here: github.com/sibtc/django-chartjs-example.

How to Save Extra Data to a Django REST Framework Serializer [Simple is Better Than Complex]

In this tutorial you are going to learn how to pass extra data to your serializer, before saving it to the database.

Introduction

When using regular Django forms, there is this common pattern where we save the form with commit=False and then pass some extra data to the instance before saving it to the database, like this:

form = InvoiceForm(request.POST)
if form.is_valid():
    invoice = form.save(commit=False)
    invoice.user = request.user
    invoice.save()

This is very useful because we can save the required information using only one database query, and it also makes it possible to handle non-nullable columns that were not defined in the form.

To simulate this pattern using a Django REST Framework serializer you can do something like this:

serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
    serializer.save(user=request.user)

You can also pass several parameters at once:

serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
    serializer.save(user=request.user, date=timezone.now(), status='sent')
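Under the hood, serializer.save(**kwargs) merges those keyword arguments into the validated data before creating or updating the instance, roughly like this (simplified from DRF’s BaseSerializer.save):

# simplified sketch of what DRF does internally
validated_data = {**serializer.validated_data, **kwargs}
instance = serializer.create(validated_data)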

Example Using APIView

In this example I created an app named core.

models.py

from django.contrib.auth.models import User
from django.db import models

class Invoice(models.Model):
    SENT = 1
    PAID = 2
    VOID = 3
    STATUS_CHOICES = (
        (SENT, 'sent'),
        (PAID, 'paid'),
        (VOID, 'void'),
    )

    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='invoices')
    number = models.CharField(max_length=30)
    date = models.DateTimeField(auto_now_add=True)
    status = models.PositiveSmallIntegerField(choices=STATUS_CHOICES)
    amount = models.DecimalField(max_digits=10, decimal_places=2)

serializers.py

from rest_framework import serializers
from core.models import Invoice

class InvoiceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Invoice
        fields = ('number', 'amount')

views.py

from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView
from core.models import Invoice
from core.serializers import InvoiceSerializer

class InvoiceAPIView(APIView):
    def post(self, request):
        serializer = InvoiceSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        serializer.save(user=request.user, status=Invoice.SENT)
        return Response(status=status.HTTP_201_CREATED)

Example Using ViewSet

Very similar example, using the same models.py and serializers.py as in the previous example.

views.py

from rest_framework.viewsets import ModelViewSet
from core.models import Invoice
from core.serializers import InvoiceSerializer

class InvoiceViewSet(ModelViewSet):
    queryset = Invoice.objects.all()
    serializer_class = InvoiceSerializer

    def perform_create(self, serializer):
        serializer.save(user=self.request.user, status=Invoice.SENT)

How to Use Date Picker with Django [Simple is Better Than Complex]

In this tutorial we are going to explore three date/datetime picker options that you can easily use in a Django project. We are going to explore how to do it manually first, then how to set up a custom widget, and finally how to use a third-party Django app with support for datetime pickers.


Introduction

The implementation of a date picker is mostly done on the front-end.

The key part of the implementation is to assure Django will receive the date input value in the correct format, and also that Django will be able to reproduce the format when rendering a form with initial data.

We can also use custom widgets to provide a deeper integration between the front-end and back-end and also to promote better reuse throughout a project.

In the next sections we are going to explore the following date pickers:

  • Tempus Dominus Bootstrap 4
  • XDSoft DateTimePicker
  • Fengyuan Chen’s Datepicker


Tempus Dominus Bootstrap 4

Docs Source

This is a great JavaScript library that integrates well with Bootstrap 4. The downside is that it requires moment.js and more or less needs Font Awesome for the icons.

It only makes sense to use this library if you are already using Bootstrap 4 + jQuery; otherwise the list of CSS and JS dependencies may look a little bit overwhelming.

To install it you can use their CDN or download the latest release from their GitHub Releases page.

If you downloaded the code from the releases page, grab the processed code from the build/ folder.

Below, a static HTML example of the datepicker:

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <title>Static Example</title>

    <!-- Bootstrap 4 -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.6/umd/popper.min.js" integrity="sha384-wHAiFfRlMFy6i5SRaxvfOCifBUQy1xHdJ/yoi7FRNXMRBu5WHdZYu1hA6ZOblgut" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/js/bootstrap.min.js" integrity="sha384-B0UglyR+jN6CkvvICOB2joaf5I4l3gm9GU6Hc1og6Ls7i6U/mkkaduKaBhlAXv9k" crossorigin="anonymous"></script>

    <!-- Font Awesome -->
    <link href="https://stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN" crossorigin="anonymous">

    <!-- Moment.js -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.23.0/moment.min.js" integrity="sha256-VBLiveTKyUZMEzJd6z2mhfxIqz3ZATCuVMawPZGzIfA=" crossorigin="anonymous"></script>

    <!-- Tempus Dominus Bootstrap 4 -->
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/css/tempusdominus-bootstrap-4.min.css" integrity="sha256-XPTBwC3SBoWHSmKasAk01c08M6sIA5gF5+sRxqak2Qs=" crossorigin="anonymous" />
    <script src="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/js/tempusdominus-bootstrap-4.min.js" integrity="sha256-z0oKYg6xiLq3yJGsp/LsY9XykbweQlHl42jHv2XTBz4=" crossorigin="anonymous"></script>

  </head>
  <body>

    <div class="input-group date" id="datetimepicker1" data-target-input="nearest">
      <input type="text" class="form-control datetimepicker-input" data-target="#datetimepicker1"/>
      <div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
        <div class="input-group-text"><i class="fa fa-calendar"></i></div>
      </div>
    </div>

    <script>
      $(function () {
        $("#datetimepicker1").datetimepicker();
      });
    </script>

  </body>
</html>

Direct Usage

The challenge now is to have this input snippet integrated with a Django form.

forms.py

from django import forms

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=forms.DateTimeInput(attrs={
            'class': 'form-control datetimepicker-input',
            'data-target': '#datetimepicker1'
        })
    )

template

<div class="input-group date" id="datetimepicker1" data-target-input="nearest">
  {{ form.date }}
  <div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
    <div class="input-group-text"><i class="fa fa-calendar"></i></div>
  </div>
</div>

<script>
  $(function () {
    $("#datetimepicker1").datetimepicker({
      format: 'DD/MM/YYYY HH:mm',
    });
  });
</script>

The script tag can be placed anywhere because the snippet $(function () { ... }); will run the datetimepicker initialization when the page is ready. The only requirement is that this script tag is placed after the jQuery script tag.

Custom Widget

You can create the widget in any app you want; here I’m going to assume we have a Django app named core.

core/widgets.py

from django.forms import DateTimeInput

class BootstrapDateTimePickerInput(DateTimeInput):
    template_name = 'widgets/bootstrap_datetimepicker.html'

    def get_context(self, name, value, attrs):
        datetimepicker_id = 'datetimepicker_{name}'.format(name=name)
        if attrs is None:
            attrs = dict()
        attrs['data-target'] = '#{id}'.format(id=datetimepicker_id)
        attrs['class'] = 'form-control datetimepicker-input'
        context = super().get_context(name, value, attrs)
        context['widget']['datetimepicker_id'] = datetimepicker_id
        return context

In the implementation above we generate a unique ID, datetimepicker_id, and also include it in the widget context.

Then the front-end implementation is done inside the widget HTML snippet.

widgets/bootstrap_datetimepicker.html

<div class="input-group date" id="{{ widget.datetimepicker_id }}" data-target-input="nearest">
  {% include "django/forms/widgets/input.html" %}
  <div class="input-group-append" data-target="#{{ widget.datetimepicker_id }}" data-toggle="datetimepicker">
    <div class="input-group-text"><i class="fa fa-calendar"></i></div>
  </div>
</div>

<script>
  $(function () {
    $("#{{ widget.datetimepicker_id }}").datetimepicker({
      format: 'DD/MM/YYYY HH:mm',
    });
  });
</script>

Note how we make use of the built-in django/forms/widgets/input.html template.

Now the usage:

core/forms.py

from .widgets import BootstrapDateTimePickerInput

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'], 
        widget=BootstrapDateTimePickerInput()
    )

Now simply render the field:

template

{{ form.date }}

The good thing about having the widget is that your form can have several date fields using it, and then you can simply render the whole form like:

<form method="post">
  {% csrf_token %}
  {{ form.as_p }}
  <input type="submit" value="Submit">
</form>
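
For instance, a form with two datetime fields can reuse the widget as in the sketch below; the form and field names (EventForm, starts_at, ends_at) are illustrative:

from django import forms
from .widgets import BootstrapDateTimePickerInput

class EventForm(forms.Form):
    # Each field gets its own datetimepicker_<name> ID from the widget,
    # so both pickers are initialized independently on the same page.
    starts_at = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=BootstrapDateTimePickerInput()
    )
    ends_at = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=BootstrapDateTimePickerInput()
    )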

XDSoft DateTimePicker

Docs Source

The XDSoft DateTimePicker is a very versatile date picker and doesn’t rely on moment.js or Bootstrap, although it looks good in a Bootstrap website.

It is easy to use and very straightforward.

You can download the source from the GitHub releases page.

Below is a static example so you can see the minimum requirements and how all the pieces come together:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <title>Static Example</title>

  <!-- jQuery -->
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>

  <!-- XDSoft DateTimePicker -->
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.min.css" integrity="sha256-DOS9W6NR+NFe1fUhEE0PGKY/fubbUCnOfTje2JMDw3Y=" crossorigin="anonymous" />
  <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.full.min.js" integrity="sha256-FEqEelWI3WouFOo2VWP/uJfs1y8KJ++FLh2Lbqc8SJk=" crossorigin="anonymous"></script>
</head>
<body>

  <input id="datetimepicker" type="text">

  <script>
    $(function () {
      $("#datetimepicker").datetimepicker();
    });
  </script>

</body>
</html>

Direct Usage

A basic integration with Django would look like this:

forms.py

from django import forms

class DateForm(forms.Form):
    date = forms.DateTimeField(input_formats=['%d/%m/%Y %H:%M'])

Simple form, default widget, nothing special.

Now using it on the template:

template

{{ form.date }}

<script>
  $(function () {
    $("#id_date").datetimepicker({
      format: 'd/m/Y H:i',
    });
  });
</script>

The id_date is the default ID Django generates for the form fields (id_ + name).
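
You can check the generated ID in the Django shell; a quick sketch (the import path is illustrative):

>>> from core.forms import DateForm
>>> print(DateForm()['date'])
<input type="text" name="date" required id="id_date">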

Custom Widget

core/widgets.py

from django.forms import DateTimeInput

class XDSoftDateTimePickerInput(DateTimeInput):
    template_name = 'widgets/xdsoft_datetimepicker.html'

widgets/xdsoft_datetimepicker.html

{% include "django/forms/widgets/input.html" %}

<script>
  $(function () {
    $("input[name='{{ widget.name }}']").datetimepicker({
      format: 'd/m/Y H:i',
    });
  });
</script>

To make the implementation more generic, this time we select the field to initialize by its name instead of its id, in case the user changes the ID prefix.
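
For instance, Django's auto_id argument changes the generated IDs, which would break an id-based selector, while the name-based selector in the widget template keeps working (the prefix below is illustrative):

form = DateForm(auto_id='expense_%s')  # the field now renders with id="expense_date"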

Now the usage:

core/forms.py

from django import forms
from .widgets import XDSoftDateTimePickerInput

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'], 
        widget=XDSoftDateTimePickerInput()
    )

template

{{ form.date }}

Fengyuan Chen’s Datepicker

Docs Source

This is a very beautiful and minimalist date picker. Unfortunately there is no time support. But if you only need dates this is a great choice.

To install this datepicker you can either use their CDN or download the sources from their GitHub releases page. Please note that they do not provide compiled/processed JavaScript files, but you can download those to your local machine from the CDN.

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <title>Static Example</title>
  <style>body {font-family: Arial, sans-serif;}</style>
  
  <!-- jQuery -->
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>

  <!-- Fengyuan Chen's Datepicker -->
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.css" integrity="sha256-b88RdwbRJEzRx95nCuuva+hO5ExvXXnpX+78h8DjyOE=" crossorigin="anonymous" />
  <script src="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.js" integrity="sha256-/7FLTdzP6CfC1VBAj/rsp3Rinuuu9leMRGd354hvk0k=" crossorigin="anonymous"></script>
</head>
<body>

  <input id="datepicker">

  <script>
    $(function () {
      $("#datepicker").datepicker();
    });
  </script>

</body>
</html>

Direct Usage

A basic integration with Django (note that we are now using DateField instead of DateTimeField):

forms.py

from django import forms

class DateForm(forms.Form):
    date = forms.DateField(input_formats=['%d/%m/%Y'])

template

{{ form.date }}

<script>
  $(function () {
    $("#id_date").datepicker({
      format:'dd/mm/yyyy',
    });
  });
</script>

Custom Widget

core/widgets.py

from django.forms import DateInput

class FengyuanChenDatePickerInput(DateInput):
    template_name = 'widgets/fengyuanchen_datepicker.html'

widgets/fengyuanchen_datepicker.html

{% include "django/forms/widgets/input.html" %}

<script>
  $(function () {
    $("input[name='{{ widget.name }}']").datepicker({
      format:'dd/mm/yyyy',
    });
  });
</script>

Usage:

core/forms.py

from django import forms
from .widgets import FengyuanChenDatePickerInput

class DateForm(forms.Form):
    date = forms.DateField(
        input_formats=['%d/%m/%Y'],
        widget=FengyuanChenDatePickerInput()
    )

template

{{ form.date }}

Conclusions

The implementation is very similar no matter what date/datetime picker you are using. Hopefully this tutorial provided some insights on how to integrate this kind of frontend library into a Django project.

As always, the best source of information about each of these libraries is their official documentation.

I also created an example project to show the usage and implementation of the widgets for each of the libraries presented in this tutorial. Grab the source code at github.com/sibtc/django-datetimepicker-example.

How to Implement Grouped Model Choice Field [Simple is Better Than Complex]

The Django forms API has two field types for working with multiple options: ChoiceField and ModelChoiceField.

Both use the select input as the default widget, and they work in a similar way, except that ModelChoiceField is designed to handle QuerySets and work with foreign key relationships.

A basic implementation using a ChoiceField would be:

class ExpenseForm(forms.Form):
    CHOICES = (
        (11, 'Credit Card'),
        (12, 'Student Loans'),
        (13, 'Taxes'),
        (21, 'Books'),
        (22, 'Games'),
        (31, 'Groceries'),
        (32, 'Restaurants'),
    )
    amount = forms.DecimalField()
    date = forms.DateField()
    category = forms.ChoiceField(choices=CHOICES)

Django ChoiceField

Grouped Choice Field

You can also organize the choices in groups to generate the <optgroup> tags like this:

class ExpenseForm(forms.Form):
    CHOICES = (
        ('Debt', (
            (11, 'Credit Card'),
            (12, 'Student Loans'),
            (13, 'Taxes'),
        )),
        ('Entertainment', (
            (21, 'Books'),
            (22, 'Games'),
        )),
        ('Everyday', (
            (31, 'Groceries'),
            (32, 'Restaurants'),
        )),
    )
    amount = forms.DecimalField()
    date = forms.DateField()
    category = forms.ChoiceField(choices=CHOICES)

Django Grouped ChoiceField

Grouped Model Choice Field

When you are using a ModelChoiceField, unfortunately, there is no built-in solution.

Recently I found a nice solution on Django’s ticket tracker, where someone proposed adding an opt_group argument to the ModelChoiceField.

While the discussion is still ongoing, Simon Charette proposed a really good solution.

Let’s see how we can integrate it in our project.

First consider the following models:

models.py

from django.db import models

class Category(models.Model):
    name = models.CharField(max_length=30)
    parent = models.ForeignKey('Category', on_delete=models.CASCADE, null=True)

    def __str__(self):
        return self.name

class Expense(models.Model):
    amount = models.DecimalField(max_digits=10, decimal_places=2)
    date = models.DateField()
    category = models.ForeignKey(Category, on_delete=models.CASCADE)

    def __str__(self):
        return str(self.amount)

So now our category, instead of being a regular choices field, is a model, and the Expense model has a foreign key relationship with it.

If we create a ModelForm using this model, the result will be very similar to our first example.

To simulate grouped categories you will need the code below. First create a new module named fields.py:

fields.py

from functools import partial
from itertools import groupby
from operator import attrgetter

from django.forms.models import ModelChoiceIterator, ModelChoiceField


class GroupedModelChoiceIterator(ModelChoiceIterator):
    def __init__(self, field, groupby):
        self.groupby = groupby
        super().__init__(field)

    def __iter__(self):
        if self.field.empty_label is not None:
            yield ("", self.field.empty_label)
        queryset = self.queryset
        # Can't use iterator() when queryset uses prefetch_related()
        if not queryset._prefetch_related_lookups:
            queryset = queryset.iterator()
        for group, objs in groupby(queryset, self.groupby):
            yield (group, [self.choice(obj) for obj in objs])


class GroupedModelChoiceField(ModelChoiceField):
    def __init__(self, *args, choices_groupby, **kwargs):
        if isinstance(choices_groupby, str):
            choices_groupby = attrgetter(choices_groupby)
        elif not callable(choices_groupby):
            raise TypeError('choices_groupby must either be a str or a callable accepting a single argument')
        self.iterator = partial(GroupedModelChoiceIterator, groupby=choices_groupby)
        super().__init__(*args, **kwargs)

And here is how you use it in your forms:

forms.py

from django import forms
from .fields import GroupedModelChoiceField
from .models import Category, Expense

class ExpenseForm(forms.ModelForm):
    category = GroupedModelChoiceField(
        queryset=Category.objects.exclude(parent=None), 
        choices_groupby='parent'
    )

    class Meta:
        model = Expense
        fields = ('amount', 'date', 'category')

Django Grouped ModelChoiceField

Because in the example above I used a self-referencing relationship, I had to add the exclude(parent=None) to hide the “group categories” from showing up in the select input as valid options. Note also that itertools.groupby only groups consecutive rows, so the queryset should be ordered by the grouping field (for example with .order_by('parent')) to keep each group together.
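
To see the <optgroup> tags rendered, you need a few parent categories with children. A minimal seeding sketch for the Django shell, assuming the models above (the import path and names are illustrative):

from core.models import Category

debt = Category.objects.create(name='Debt')                # rendered as a group label
Category.objects.create(name='Credit Card', parent=debt)   # rendered as an option
Category.objects.create(name='Student Loans', parent=debt)

entertainment = Category.objects.create(name='Entertainment')
Category.objects.create(name='Books', parent=entertainment)
Category.objects.create(name='Games', parent=entertainment)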


Further Reading

You can download the code used in this tutorial from GitHub: github.com/sibtc/django-grouped-choice-field-example

Credit for the solution goes to Simon Charette, on the Django ticket tracker.

How to Use JWT Authentication with Django REST Framework [Simple is Better Than Complex]

JWT stands for JSON Web Token, and it is an authentication strategy used in client/server applications where the client is typically a Web application using JavaScript and a frontend framework like Angular, React or Vue.js.

In this tutorial we are going to explore the specifics of JWT authentication. If you want to learn more about Token-based authentication using Django REST Framework (DRF), or if you want to know how to start a new DRF project, you can read this tutorial: How to Implement Token Authentication using Django REST Framework. The concepts are the same; we are just going to switch the authentication backend.


How Does JWT Work?

The JWT is just an authorization token that should be included in all requests:

curl http://127.0.0.1:8000/hello/ -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9.Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY'

The JWT is acquired by exchanging a username + password for an access token and a refresh token.

The access token is usually short-lived (it expires in 5 minutes or so, though this can be customized).

The refresh token lives a little bit longer (expires in 24 hours, also customizable). It is comparable to an authentication session. After it expires, you need a full login with username + password again.

Why is that?

It’s a security feature, and it’s also because the JWT holds a little bit more information. If you look closely at the example I gave above, you will see the token is composed of three parts:

xxxxx.yyyyy.zzzzz

Those are the three distinct parts that compose a JWT:

header.payload.signature

So we have here:

header = eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
payload = eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9
signature = Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY

This information is encoded using Base64. If we decode it, we will see something like this:

header

{
  "typ": "JWT",
  "alg": "HS256"
}

payload

{
  "token_type": "access",
  "exp": 1543828431,
  "jti": "7f5997b7150d46579dc2b49167097e7b",
  "user_id": 1
}

signature

The signature is issued by the JWT backend, using the header base64 + payload base64 + SECRET_KEY. Upon each request this signature is verified. If any information in the header or in the payload was changed by the client, it will invalidate the signature. The only way of checking and validating the signature is by using your application’s SECRET_KEY. Among other things, that’s why you should always keep your SECRET_KEY secret!
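
You can inspect the first two parts yourself; a minimal sketch in Python (the token is the example from above; remember the signature can only be verified with the SECRET_KEY):

import base64
import json

def decode_segment(segment):
    # JWT segments are base64url-encoded without padding; restore it first.
    padded = segment + '=' * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9.Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY'
header, payload, signature = token.split('.')
print(decode_segment(header))   # {'typ': 'JWT', 'alg': 'HS256'}
print(decode_segment(payload))  # {'token_type': 'access', 'exp': 1543828431, ...}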


Installation & Setup

For this tutorial we are going to use the djangorestframework_simplejwt library, recommended by the DRF developers.

pip install djangorestframework_simplejwt

settings.py

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework_simplejwt.authentication.JWTAuthentication',
    ],
}

urls.py

from django.urls import path
from rest_framework_simplejwt import views as jwt_views

urlpatterns = [
    # Your URLs...
    path('api/token/', jwt_views.TokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('api/token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token_refresh'),
]

Example Code

For this tutorial I will use the following route and API view:

views.py

from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated


class HelloView(APIView):
    permission_classes = (IsAuthenticated,)

    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)

urls.py

from django.urls import path
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
]

Usage

I will be using HTTPie to consume the API endpoints via the terminal. But you can also use cURL (readily available on many operating systems) to try things out locally.

Alternatively, use the DRF web interface by accessing the endpoint URLs like this:

DRF JWT Obtain Token

Obtain Token

The first step is to authenticate and obtain the token. The endpoint is /api/token/ and it only accepts POST requests.

http post http://127.0.0.1:8000/api/token/ username=vitor password=123

HTTPie JWT Obtain Token

So basically your response body is the two tokens:

{
    "access": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjU5LCJqdGkiOiIyYmQ1NjI3MmIzYjI0YjNmOGI1MjJlNThjMzdjMTdlMSIsInVzZXJfaWQiOjF9.D92tTuVi_YcNkJtiLGHtcn6tBcxLCBxz9FKD3qzhUg8",
    "refresh": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMxMDM1OSwianRpIjoiMjk2ZDc1ZDA3Nzc2NDE0ZjkxYjhiOTY4MzI4NGRmOTUiLCJ1c2VyX2lkIjoxfQ.rA-mnGRg71NEW_ga0sJoaMODS5ABjE5HnxJDb0F8xAo"
}

After that you are going to store both the access token and the refresh token on the client side, usually in localStorage.

In order to access the protected views on the backend (i.e., the API endpoints that require authentication), you should include the access token in the header of all requests, like this:

http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"

HTTPie JWT Hello, World!

You can use this access token for the next five minutes.

After five minutes, the token will expire, and if you try to access the view again, you are going to get the following error:

http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"

HTTPie JWT Expired

Refresh Token

To get a new access token, you should use the refresh token endpoint /api/token/refresh/ posting the refresh token:

http post http://127.0.0.1:8000/api/token/refresh/ refresh=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMwODIyMiwianRpIjoiNzAyOGFlNjc0ZTdjNDZlMDlmMzUwYjg3MjU1NGUxODQiLCJ1c2VyX2lkIjoxfQ.Md8AO3dDrQBvWYWeZsd_A1J39z6b6HEwWIUZ7ilOiPE

HTTPie JWT Refresh Token

The response is a new access token that you should use in subsequent requests.

The refresh token is valid for the next 24 hours. When it finally expires too, the user will need to perform a full authentication again using their username and password to get a new set of access token + refresh token.
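
The whole cycle can be scripted; a minimal sketch using the Python requests library against the endpoints above (credentials are the example user from this tutorial):

import requests

base = 'http://127.0.0.1:8000'

# Exchange username + password for the token pair.
tokens = requests.post(base + '/api/token/',
                       data={'username': 'vitor', 'password': '123'}).json()

# Use the access token while it is valid.
headers = {'Authorization': 'Bearer ' + tokens['access']}
print(requests.get(base + '/hello/', headers=headers).json())

# Once the access token expires, trade the refresh token for a new one.
response = requests.post(base + '/api/token/refresh/',
                         data={'refresh': tokens['refresh']})
tokens['access'] = response.json()['access']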


What’s The Point of The Refresh Token?

At first glance the refresh token may look pointless, but in fact it is necessary to make sure the user still has the correct permissions. If your access token has a long expiration time, it may take longer to update the information associated with the token. That’s because the authentication check is done by cryptographic means, instead of querying the database and verifying the data. So some information is sort of cached.

There is also a security aspect, in the sense that the refresh token only travels in the POST data, while the access token is sent via an HTTP header, which may be logged along the way. So this also gives only a short window of opportunity, should your access token be compromised.


Further Reading

This should cover the basics on the backend implementation. It’s worth checking the djangorestframework_simplejwt settings for further customization and to get a better idea of what the library offers.
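
For example, the 5-minute and 24-hour lifetimes used throughout this tutorial can be changed through the SIMPLE_JWT setting; a minimal sketch (these values mirror the lifetimes mentioned earlier; check the library docs for the full list of options):

settings.py

from datetime import timedelta

SIMPLE_JWT = {
    'ACCESS_TOKEN_LIFETIME': timedelta(minutes=5),
    'REFRESH_TOKEN_LIFETIME': timedelta(days=1),
}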

The implementation on the frontend depends on what framework/library you are using; there are libraries and articles covering popular frontend frameworks like Angular, React and Vue.js.

The code used in this tutorial is available at github.com/sibtc/drf-jwt-example.

Advanced Form Rendering with Django Crispy Forms [Simple is Better Than Complex]

[Django 2.1.3 / Python 3.6.5 / Bootstrap 4.1.3]

In this tutorial we are going to explore some of the Django Crispy Forms features to handle advanced/custom forms rendering. This blog post started as a discussion in our community forum, so I decided to compile the insights and solutions here to benefit a wider audience.


Introduction

Throughout this tutorial we are going to implement the following Bootstrap 4 form using Django APIs:

Bootstrap 4 Form

This was taken from the official Bootstrap 4 documentation as an example of how to use form rows.

NOTE!

The examples below refer to a base.html template. Consider the code below:

base.html

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
</head>
<body>
  <div class="container">
    {% block content %}
    {% endblock %}
  </div>
</body>
</html>

Installation

Install it using pip:

pip install django-crispy-forms

Add it to your INSTALLED_APPS and select which styles to use:

settings.py

INSTALLED_APPS = [
    ...

    'crispy_forms',
]

CRISPY_TEMPLATE_PACK = 'bootstrap4'

For detailed instructions about how to install django-crispy-forms, please refer to this tutorial: How to Use Bootstrap 4 Forms With Django


Basic Form Rendering

The Python code required to represent the form above is the following:

from django import forms

STATES = (
    ('', 'Choose...'),
    ('MG', 'Minas Gerais'),
    ('SP', 'Sao Paulo'),
    ('RJ', 'Rio de Janeiro')
)

class AddressForm(forms.Form):
    email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
    password = forms.CharField(widget=forms.PasswordInput())
    address_1 = forms.CharField(
        label='Address',
        widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
    )
    address_2 = forms.CharField(
        widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
    )
    city = forms.CharField()
    state = forms.ChoiceField(choices=STATES)
    zip_code = forms.CharField(label='Zip')
    check_me_out = forms.BooleanField(required=False)

In this case I’m using a regular Form, but it could also be a ModelForm based on a Django model with similar fields. The state field and the STATES choices could be either a foreign key or anything else. Here I’m just using a simple static example with three Brazilian states.

Template:

{% extends 'base.html' %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    <table>{{ form.as_table }}</table>
    <button type="submit">Sign in</button>
  </form>
{% endblock %}

Rendered HTML:

Simple Django Form

Rendered HTML with validation state:

Simple Django Form Validation State


Basic Crispy Form Rendering

Same form code as in the example before.

Template:

{% extends 'base.html' %}

{% load crispy_forms_tags %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {{ form|crispy }}
    <button type="submit" class="btn btn-primary">Sign in</button>
  </form>
{% endblock %}

Rendered HTML:

Crispy Django Form

Rendered HTML with validation state:

Crispy Django Form Validation State


Custom Fields Placement with Crispy Forms

Same form code as in the first example.

Template:

{% extends 'base.html' %}

{% load crispy_forms_tags %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    <div class="form-row">
      <div class="form-group col-md-6 mb-0">
        {{ form.email|as_crispy_field }}
      </div>
      <div class="form-group col-md-6 mb-0">
        {{ form.password|as_crispy_field }}
      </div>
    </div>
    {{ form.address_1|as_crispy_field }}
    {{ form.address_2|as_crispy_field }}
    <div class="form-row">
      <div class="form-group col-md-6 mb-0">
        {{ form.city|as_crispy_field }}
      </div>
      <div class="form-group col-md-4 mb-0">
        {{ form.state|as_crispy_field }}
      </div>
      <div class="form-group col-md-2 mb-0">
        {{ form.zip_code|as_crispy_field }}
      </div>
    </div>
    {{ form.check_me_out|as_crispy_field }}
    <button type="submit" class="btn btn-primary">Sign in</button>
  </form>
{% endblock %}

Rendered HTML:

Custom Crispy Django Form

Rendered HTML with validation state:

Custom Crispy Django Form Validation State


Crispy Forms Layout Helpers

We could use the crispy forms layout helpers to achieve the same result as above. The implementation is done inside the form __init__ method:

forms.py

from django import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Submit, Row, Column

STATES = (
    ('', 'Choose...'),
    ('MG', 'Minas Gerais'),
    ('SP', 'Sao Paulo'),
    ('RJ', 'Rio de Janeiro')
)

class AddressForm(forms.Form):
    email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
    password = forms.CharField(widget=forms.PasswordInput())
    address_1 = forms.CharField(
        label='Address',
        widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
    )
    address_2 = forms.CharField(
        widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
    )
    city = forms.CharField()
    state = forms.ChoiceField(choices=STATES)
    zip_code = forms.CharField(label='Zip')
    check_me_out = forms.BooleanField(required=False)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            Row(
                Column('email', css_class='form-group col-md-6 mb-0'),
                Column('password', css_class='form-group col-md-6 mb-0'),
                css_class='form-row'
            ),
            'address_1',
            'address_2',
            Row(
                Column('city', css_class='form-group col-md-6 mb-0'),
                Column('state', css_class='form-group col-md-4 mb-0'),
                Column('zip_code', css_class='form-group col-md-2 mb-0'),
                css_class='form-row'
            ),
            'check_me_out',
            Submit('submit', 'Sign in')
        )

The template implementation is very minimal:

{% extends 'base.html' %}

{% load crispy_forms_tags %}

{% block content %}
  {% crispy form %}
{% endblock %}

The end result is the same.

Rendered HTML:

Custom Crispy Django Form

Rendered HTML with validation state:

Custom Crispy Django Form Validation State


Custom Crispy Field

You may also customize the field template and easily reuse it throughout your application. Let’s say we want to use the custom Bootstrap 4 checkbox:

Bootstrap 4 Custom Checkbox

From the official documentation, this is the necessary HTML to output the input above:

<div class="custom-control custom-checkbox">
  <input type="checkbox" class="custom-control-input" id="customCheck1">
  <label class="custom-control-label" for="customCheck1">Check this custom checkbox</label>
</div>

Using the crispy forms API, we can create a new template for this custom field in our “templates” folder:

custom_checkbox.html

{% load crispy_forms_field %}

<div class="form-group">
  <div class="custom-control custom-checkbox">
    {% crispy_field field 'class' 'custom-control-input' %}
    <label class="custom-control-label" for="{{ field.id_for_label }}">{{ field.label }}</label>
  </div>
</div>

Now we can create a new crispy field, either in our forms.py module or in a new Python module named fields.py or something.

forms.py

from crispy_forms.layout import Field

class CustomCheckbox(Field):
    template = 'custom_checkbox.html'

We can use it now in our form definition:

forms.py

class CustomFieldForm(AddressForm):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            Row(
                Column('email', css_class='form-group col-md-6 mb-0'),
                Column('password', css_class='form-group col-md-6 mb-0'),
                css_class='form-row'
            ),
            'address_1',
            'address_2',
            Row(
                Column('city', css_class='form-group col-md-6 mb-0'),
                Column('state', css_class='form-group col-md-4 mb-0'),
                Column('zip_code', css_class='form-group col-md-2 mb-0'),
                css_class='form-row'
            ),
            CustomCheckbox('check_me_out'),  # <-- Here
            Submit('submit', 'Sign in')
        )

(PS: the AddressForm was defined earlier and is the same as in the previous example.)

The end result:

Bootstrap 4 Custom Checkbox


Conclusions

There is much more Django Crispy Forms can do. Hopefully this tutorial gave you some extra insights on how to use the form helpers and layout classes. As always, the official documentation is the best source of information:

Django Crispy Forms layouts docs

Also, the code used in this tutorial is available on GitHub at github.com/sibtc/advanced-crispy-forms-examples.

How to Implement Token Authentication using Django REST Framework [Simple is Better Than Complex]

In this tutorial you are going to learn how to implement Token-based authentication using Django REST Framework (DRF). Token authentication works by exchanging a username and password for a token that will be used in all subsequent requests to identify the user on the server side.

The specifics of how the authentication is handled on the client side vary a lot depending on the technology/language/framework you are working with. The client could be a mobile application using iOS or Android. It could be a desktop application using Python or C++. It could be a Web application using PHP or Ruby.

But once you understand the overall process, it’s easier to find the necessary resources and documentation for your specific use case.

Token authentication is suitable for client-server applications, where the token is safely stored. You should never expose your token, as it would be (sort of) equivalent to handing out your username and password.


Setting Up The REST API Project

So let’s start from the very beginning. Install Django and DRF:

pip install django
pip install djangorestframework

Create a new Django project:

django-admin.py startproject myapi .

Navigate to the myapi folder:

cd myapi

Start a new app. I will call my app core:

django-admin.py startapp core

Here is what your project structure should look like:

myapi/
 |-- core/
 |    |-- migrations/
 |    |-- __init__.py
 |    |-- admin.py
 |    |-- apps.py
 |    |-- models.py
 |    |-- tests.py
 |    +-- views.py
 |-- __init__.py
 |-- settings.py
 |-- urls.py
 +-- wsgi.py
manage.py

Add the core app (you created) and the rest_framework app (you installed) to the INSTALLED_APPS, inside the settings.py module:

myapi/settings.py

INSTALLED_APPS = [
    # Django Apps
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # Third-Party Apps
    'rest_framework',

    # Local Apps (Your project's apps)
    'myapi.core',
]

Return to the project root (the folder where the manage.py script is), and migrate the database:

python manage.py migrate

Let’s create our first API view just to test things out:

myapi/core/views.py

from rest_framework.views import APIView
from rest_framework.response import Response

class HelloView(APIView):
    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)

Now register a path in the urls.py module:

myapi/urls.py

from django.urls import path
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
]

So now we have an API with just one endpoint, /hello/, on which we can perform GET requests. We can use the browser to consume this endpoint, just by accessing the URL http://127.0.0.1:8000/hello/:

Hello Endpoint HTML

We can also ask to receive the response as plain JSON data by passing the format parameter in the querystring like http://127.0.0.1:8000/hello/?format=json:

Hello Endpoint JSON

Both methods are fine to try out a DRF API, but sometimes a command line tool is more handy, as we can play more easily with the request headers. You can use cURL, which is widely available on Linux and macOS:

curl http://127.0.0.1:8000/hello/

Hello Endpoint cURL

But usually I prefer to use HTTPie, which is a pretty awesome Python command line tool:

http http://127.0.0.1:8000/hello/

Hello Endpoint HTTPie

Now let’s protect this API endpoint so we can implement the token authentication:

myapi/core/views.py

from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated  # <-- Here


class HelloView(APIView):
    permission_classes = (IsAuthenticated,)             # <-- And here

    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)

Try again to access the API endpoint:

http http://127.0.0.1:8000/hello/

Hello Endpoint HTTPie Forbidden

And now we get an HTTP 403 Forbidden error. Now let’s implement the token authentication so we can access this endpoint.


Implementing the Token Authentication

We need to add two pieces of information to our settings.py module. First include rest_framework.authtoken in your INSTALLED_APPS, then add TokenAuthentication to REST_FRAMEWORK:

myapi/settings.py

INSTALLED_APPS = [
    # Django Apps
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # Third-Party Apps
    'rest_framework',
    'rest_framework.authtoken',  # <-- Here

    # Local Apps (Your project's apps)
    'myapi.core',
]

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.TokenAuthentication',  # <-- And here
    ],
}

Migrate the database to create the table that will store the authentication tokens:

python manage.py migrate

Migrate Auth Token

Now we need a user account. Let’s just create one using the manage.py command line utility:

python manage.py createsuperuser --username vitor --email vitor@example.com

The easiest way to generate a token, just for testing purposes, is using the command line utility again:

python manage.py drf_create_token vitor

drf_create_token

This piece of information, the random string 9054f7aa9305e012b3c2300408c3dfdf390fcddf is what we are going to use next to authenticate.

But now that we have the TokenAuthentication in place, let’s try to make another request to our /hello/ endpoint:

http http://127.0.0.1:8000/hello/

WWW-Authenticate Token

Notice how our API is now providing some extra information to the client on the required authentication method.

So finally, let’s use our token!

http http://127.0.0.1:8000/hello/ 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'

REST Token Authentication

And that’s pretty much it. From now on, all subsequent requests should include the header Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf.

The formatting looks weird, and how to set this header is usually a point of confusion. It depends on the client you are using and how it sets HTTP request headers.

For example, if we were using cURL, the command would be something like this:

curl http://127.0.0.1:8000/hello/ -H 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'

Or if it was a Python requests call:

import requests

url = 'http://127.0.0.1:8000/hello/'
headers = {'Authorization': 'Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'}
r = requests.get(url, headers=headers)

Or, if you were using Angular, you could implement an HttpInterceptor and set the header:

import { Injectable } from '@angular/core';
import { HttpRequest, HttpHandler, HttpEvent, HttpInterceptor } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    const user = JSON.parse(localStorage.getItem('user'));
    if (user && user.token) {
      request = request.clone({
        setHeaders: {
          Authorization: `Token ${user.token}`
        }
      });
    }
    return next.handle(request);
  }
}

User Requesting a Token

DRF provides an endpoint for users to request an authentication token using their username and password.

Include the following route in the urls.py module:

myapi/urls.py

from django.urls import path
from rest_framework.authtoken.views import obtain_auth_token  # <-- Here
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
    path('api-token-auth/', obtain_auth_token, name='api_token_auth'),  # <-- And here
]

So now we have a brand new API endpoint, which is /api-token-auth/. Let’s first inspect it:

http http://127.0.0.1:8000/api-token-auth/

API Token Auth

It doesn’t handle GET requests. Basically it’s just a view to receive a POST request with username and password.

Let’s try again:

http post http://127.0.0.1:8000/api-token-auth/ username=vitor password=123

API Token Auth POST

The response body is the token associated with this particular user. After this point you store this token and apply it to future requests.

Then, again, the way you are going to make the POST request to the API depends on the language/framework you are using.

If this were an Angular client, you could store the token in localStorage; if it were a desktop CLI application, you could store it in a dot file in the user’s home directory, as in the sketch below.
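
A minimal sketch of the CLI scenario, assuming the endpoints and example user from this tutorial; the cache file location is an arbitrary choice:

import os
import requests

TOKEN_PATH = os.path.expanduser('~/.myapi_token')  # hypothetical cache location

if os.path.exists(TOKEN_PATH):
    # Reuse the cached token.
    with open(TOKEN_PATH) as f:
        token = f.read().strip()
else:
    # Request a token once and cache it for later runs.
    r = requests.post('http://127.0.0.1:8000/api-token-auth/',
                      data={'username': 'vitor', 'password': '123'})
    token = r.json()['token']
    with open(TOKEN_PATH, 'w') as f:
        f.write(token)

headers = {'Authorization': 'Token ' + token}
print(requests.get('http://127.0.0.1:8000/hello/', headers=headers).json())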


Conclusions

Hopefully this tutorial provided some insights on how token authentication works. I will try to follow up this tutorial with some concrete examples of Angular applications, command line applications and Web clients as well.

It is important to note that the default Token implementation has some limitations, such as allowing only one token per user and providing no built-in way to set an expiry date on the token.

You can grab the code used in this tutorial at github.com/sibtc/drf-token-auth-example.

13-11-2020

17:24

30 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I will stop (for the time being). It is high time to bring this blog up to date and to occupy myself with…


Python GUI application for consistent backups with fsarchiver [linux blogs franz ulenaers]

Python GUI application for making consistent backups with fsarchiver

A partition of type "Linux LVM" can be used for logical volumes, but also for a "snapshot"!
A snapshot is an exact copy of a logical volume, frozen at a given moment: this makes it possible to make consistent backups of logical volumes while those volumes are in use!

My physical and logical volumes were created as follows:

    physical volume

      pvcreate /dev/sda1

    physical volume group

      vgcreate mydell /dev/sda1

    logical volumes

      lvcreate -L 1G -n boot mydell

      lvcreate -L 100G -n data mydell

      lvcreate -L 50G -n home mydell

      lvcreate -L 50G -n root mydell

      lvcreate -L 1G -n swap mydell

Start screen
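
The GUI itself is in the attachments, but the core idea can be sketched in a few lines of Python: snapshot the volume, archive the snapshot with fsarchiver, then drop the snapshot. A minimal sketch, assuming the mydell volume group above; the snapshot size, destination path and function name are illustrative, and it must run as root:

import subprocess

def backup_logical_volume(vg='mydell', lv='home', snap_size='5G',
                          dest='/backup/home.fsa'):
    snap = lv + '_snap'
    snap_device = '/dev/{}/{}'.format(vg, snap)
    # Freeze the volume's current state in a snapshot; the original stays in use.
    subprocess.check_call(['lvcreate', '--snapshot', '--size', snap_size,
                           '--name', snap, '/dev/{}/{}'.format(vg, lv)])
    try:
        # Archive the frozen snapshot, which gives a consistent backup.
        subprocess.check_call(['fsarchiver', 'savefs', dest, snap_device])
    finally:
        # Always remove the snapshot so it cannot fill up and become invalid.
        subprocess.check_call(['lvremove', '-f', snap_device])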

procedures MyCloud [linux blogs franz ulenaers]

Procedures MyCloud

  • The procedure lftpUlefr01Cloudupload is used to upload files and folders to MyCloud

  • The procedure lftpUlefr01Cloudmirror is used to fetch changes back down


Both procedures use the lftp program (a "Sophisticated file transfer program") and are used to allow synchronisation between laptop and desktop.


The procedures were adapted so that hidden files and hidden folders are processed as well. For the mirror, certain rarely changing files and folders are filtered out (--exclude) so that they are not processed again; they remain on the Cloud as a backup, but not on the various laptops (this was done for older mails of 2016, months 2016-11 and 2016-12, and for all earlier months of 2017 up to and including September!).

  • see attachments


python GUI application tune2fs [linux blogs franz ulenaers]

python GUI application for the tune2fs command

Created Wednesday 18 October 2017

written in the Python programming language using Gtk+ 3

start in a terminal with: sudo python mytune2fs.py

or compile the Python source and start the compiled version


see attachments:
* pdf
* mytune2fs.py

Python GUI application myarchive.py [linux blogs franz ulenaers]

python GUI application for making backups with fsarchiver

Created Friday 13 October 2017

GUI application for making backups, archive info and restores with fsarchiver

see the included file: python_GUI_applicatie_backups_maken_met_fsarchiver.pdf


start in terminal mode with:

* sudo python myarchive.py

* sudo python myarchive2.py

or by building a compiled version and starting the generated objects


python myfsck.py [linux blogs franz ulenaers]

python GUI application for the fsck command

Created Friday 13 October 2017

see the included file myfsck.py

This application can mount and unmount devices, but is mainly intended to run the fsck command.

Root rights are required!

help?

* start in terminal mode

* sudo python myfsck.py


Making a file that cannot be modified, renamed or deleted in Linux! [linux blogs franz ulenaers]

Making a file that cannot be modified, renamed or deleted in Linux!


file .encfs6.xml


how: sudo chattr +i /data/Encrypt/.encfs6.xml

you cannot modify the file, you cannot rename the file, you cannot delete the file, even if you are root

  • set the attribute
  • view the status
    • lsattr .encfs6.xml
      • ----i--------e-- .encfs6.xml
        • the i means immutable
  • to remove the immutable attribute
    • chattr -i .encfs6.xml

Backup laptop [linux blogs franz ulenaers]

The laptop is a multiboot system: Windows 7 with encryption and Linux Mint.
For the backup of my laptop, see http://users.telenet.be/franz.ulenaers/laptopca-new.html

Links in Linux [linux blogs franz ulenaers]

In Linux you can give files multiple names, so you can store a file in several places in the file tree without taking up extra space on the hard disk (more or less).

There are two kinds of links:

  1. hard links

  2. symbolic links

A hard link uses the same file number (inode).

A hard link does not work for a directory!

A hard link must be on the same filesystem, and the original file must exist!

With a symbolic link, the file gets a new file number; the file it points to does not have to exist.

A symbolic link also works for a directory.

bash shell, user ulefr01

pwd
/home/ulefr01/cgcles/linux
ls linuxcursus.odt -ila
293800 -rw-r--r-- 1 ulefr01 ulefr01 4251348 2005-12-17 21:11 linuxcursus.odt

The file linuxcursus.odt is 4.2M in size, inode number 293800.

bash shell, user tom

pwd
/home/tom
ln /home/ulefr01/cgcles/linux/linuxcursus.odt cursuslinux.odt
tom@franz3:~ $ ls cursuslinux.odt -il
293800 -rw-r--r-- 2 ulefr01 ulefr01 4251348 2005-12-17 21:11 cursuslinux.odt
no extra 4.2M of space used, same inode number 293800!

bash shell, user root

pwd
/root
root@franz3:~ # ln /home/ulefr01/cgcles/linux/linuxcursus.odt linuxcursus.odt
root@franz3:~ # ls -il linux*
293800 -rw-rw-r-- 3 ulefr01 ulefr01 4251300 2005-12-17 21:31 linuxcursus.odt
no extra 4.2M of space used, same inode number 293800!

bash shell, user ulefr01, symbolic link

ln -s cgcles/linux/linuxcursus.odt linuxcursus.odt
ulefr01@franz3:~ $ ls -il linuxcursus.odt
1191741 lrwxrwxrwx 1 ulefr01 ulefr01 28 2005-12-17 21:42 linuxcursus.odt -> cgcles/linux/linuxcursus.odt
only 28 bytes

ln -s linuxcursus.odt test.odt
1191898 lrwxrwxrwx 1 ulefr01 ulefr01 15 2005-12-17 22:00 test.odt -> linuxcursus.odt
only 15 bytes

rm linuxcursus.odt
ulefr01@franz3:~ $ ls *.odt -il
1193723 -rw-r--r-- 1 ulefr01 ulefr01 27521 2005-11-23 20:11 Backup&restore.odt
1193942 -rw-r--r-- 1 ulefr01 ulefr01 13535 2005-11-26 16:11 doc.odt
1191933 -rw------- 1 ulefr01 ulefr01 6135 2005-12-06 12:00 fru.odt
1193753 -rw-r--r-- 1 ulefr01 ulefr01 19865 2005-11-23 22:44 harddiskdata.odt
1193576 -rw-r--r-- 1 ulefr01 ulefr01 7198 2005-11-26 21:46 ooo-1.odt
1191749 -rw------- 1 ulefr01 ulefr01 22542 2005-12-06 16:16 Regen.odt
1191898 lrwxrwxrwx 1 ulefr01 ulefr01 15 2005-12-17 22:00 test.odt -> linuxcursus.odt
test.odt now points to a file that no longer exists!
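
The same experiment can be done from Python; a small sketch (the paths are the ones from the session above and purely illustrative):

import os

# Hard link: same inode, works only on the same filesystem.
os.link('cgcles/linux/linuxcursus.odt', 'cursuslinux.odt')
print(os.stat('cursuslinux.odt').st_ino)   # shared with the original file

# Symbolic link: its own inode, and the target does not even have to exist.
os.symlink('linuxcursus.odt', 'test.odt')
print(os.readlink('test.odt'))             # -> linuxcursus.odt
print(os.path.exists('test.odt'))          # False as long as the target is missing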

18-02-2020

21:55

Samsung Galaxy Z Flip, S20(+) and S20 Ultra hands-on [Laatste Artikelen - Webwereld]

Samsung invited us to take a close look at its three newest smartphones. We gladly took the opportunity, and we share our findings with you.

02-02-2020

21:29

Hands-on: Synology Virtual Machine Manager [Laatste Artikelen - Webwereld]

It is well known by now that your NAS can be used for much more than just storing files, but did you know that you can also manage virtual machines with it? We explain how.

23-01-2020

16:42

What you need to know about FIDO keys [Laatste Artikelen - Webwereld]

Thanks to the FIDO2 standard, it is possible to log in securely to various online services without a password. Microsoft and Google, among others, already offer options for this. This year more organisations are likely to follow.

How to use your iPhone without an Apple ID [Laatste Artikelen - Webwereld]

Nowadays you have to create an account for just about everything you want to do online, even if you don't plan to work online or simply don't feel like sharing your data with the manufacturer. Today we show you how to manage that with your iPhone or iPad.

Major Internet Explorer flaw already exploited in the wild [Laatste Artikelen - Webwereld]

A new zero-day vulnerability has been discovered in Microsoft Internet Explorer. The new flaw is already being exploited, and a security update is not yet available.

How to install Chrome extensions in the new Edge [Laatste Artikelen - Webwereld]

The new version of Edge is built on code from the Chromium project, but in the default configuration extensions are installed exclusively via the Microsoft Store. Fortunately, that is fairly easy to change.

19-01-2020

12:59

Windows 10 upgrade still free [Laatste Artikelen - Webwereld]

A few years ago, Microsoft gave users the opportunity to upgrade from Windows 7 to Windows 10 for free. At times this went so far that even users who did not want an upgrade got one. The offer has long since ended, but upgrading for free is still possible, and it is now easier than ever. We tell you how.

Chrome, Edge, Firefox: Which browser is the fastest? [Laatste Artikelen - Webwereld]

A lot has changed in the PC browser market. About five years ago there was more competition and fully independent development; now only two engines remain: the one behind Chrome and the one behind Firefox. With the release of Microsoft's Blink-based Edge this month, we look at benchmarks and real-world tests.

Cooler Master redesigns thermal paste tubes over drug suspicions [Laatste Artikelen - Webwereld]

Cooler Master has redesigned its thermal paste syringes because, by its own account, the company is tired of constantly having to explain to parents that the contents are not drugs but thermal paste.

06-03-2018

19-09-2017

10:33

Embedded Linux Engineer [Job Openings]

You're eager to work with Linux in an exciting environment. You have a lot of PC equipment experience. Prior experience with embedded Linux or small footprint distributions is considered a plus. Region East/West Flanders.

Linux Teacher [Job Openings]

We're looking for someone capable of teaching Linux and/or Solaris professionally. Ideally the candidate has experience with teaching Linux, and possibly other non-Windows OSes as well.

Kernel Developer [Job Openings]

We're looking for someone with kernel device driver development experience. Preferably, but not necessarily, with knowledge of AV or TV devices.

C/C++ Developers [Job Openings]

We're searching for Linux C/C++ developers. Region Leuven.

Feeds

Feed | RSS | Last fetched | Next fetched after
Computable | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
GNOMON | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
http://www.h-online.com/news/atom.xml | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
https://www.heise.de/en | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
Job Openings | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
Laatste Artikelen - Webwereld | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
linux blogs franz ulenaers | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
Linux Journal - The Original Magazine of the Linux Community | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
Linux Today | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
OMG! Ubuntu! | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
Planet Python | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
Press Releases Archives - The Document Foundation Blog | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
Simple is Better Than Complex | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
Slashdot: Linux | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
Tech Drive-in | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01
ulefr01 - blog franz ulenaers | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01

Last modified: Monday 7 October 2024 17:02
Copyright © 2023 - Franz Ulenaers (email: franz.ulenaers@telenet.be)