LibreOffice 24.8.1, the first minor release of the recently announced LibreOffice 24.8 family, is available for download [Press Releases Archives - The Document Foundation Blog]
The LibreOffice 24.8 family is optimised for the privacy-conscious office suite user who wants full control over the information they share
Berlin, 12 September 2024 – LibreOffice 24.8.1, the first minor release of the LibreOffice 24.8 family of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple Silicon and Intel) and Linux, is available at www.libreoffice.org/download. For users who don’t need the latest features and prefer a more tested version, TDF maintains the previous LibreOffice 24.2 family, with several months of back-ported fixes. The current version is LibreOffice 24.2.6.
LibreOffice is the only software for creating documents – which may contain personal or confidential information – that respects the privacy of the user, ensuring that the user is able to decide if and with whom to share the content they create. As such, LibreOffice is the best option for the privacy-conscious office suite user, and it offers a feature set comparable to the leading product on the market.
In addition, LibreOffice offers a range of interface options to suit different user habits, from traditional to modern, and makes the most of different screen sizes by optimising the space available on the desktop to put the maximum number of features just a click or two away.
The biggest advantage over competing products is the LibreOffice Technology Engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those from ecosystem companies – are based. This allows LibreOffice to provide a better user experience and to produce identical and fully interoperable documents based on the two available ISO standards: the Open Document Format (ODT, ODS and ODP) and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a great deal of artificial complexity, which can cause problems for users who are confident that they are using a true open standard.
End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which can be downloaded from the following link: books.libreoffice.org. In addition, they will be able to get first-level technical support from volunteers on the user mailing lists and the Ask LibreOffice website: ask.libreoffice.org.
A short video highlighting the main new features is available on YouTube and PeerTube: peertube.opencloud.lu/w/ibmZUeRgnx9bPXQeYUyXTV.
LibreOffice for Enterprise
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.
The Document Foundation has developed a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the provision of an LTS (long-term support) enterprise-optimised version of LibreOffice, plus migration consulting and training provided by certified professionals who offer value-added solutions that are consistent with proprietary offerings. Reference: www.libreoffice.org/get-help/professional-support/.
In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.
LibreOffice 24.8.1 availability
LibreOffice 24.8.1 is available from www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 (no longer supported by Microsoft) and Apple macOS 10.15. Products based on LibreOffice technology for Android and iOS are listed at www.libreoffice.org/download/android-and-ios/.
LibreOffice users, free software advocates and community members can support The Document Foundation by making a donation at www.libreoffice.org/donate.
23 Best Free and Open Source GUI Internet Radio Software [Linux Today]
Here’s our verdict on the best GUI-based internet radio software.
CachyOS October 2024 Update Brings Enhanced AMD Support [Linux Today]
Arch-based CachyOS’s October ’24 update fixes AMD GPU issues, improves the KDE Wayland session, and upgrades Python and Mesa.
Zato Blog: API Testing in Pure English [Planet Python]
Do you have 20 minutes to learn how to test APIs in pure English, without any programming needed?
Great, the API testing tutorial is here.
Right after you complete it, you'll be able to write API tests like the one below.
➤ Read about how to use Python to build and integrate enterprise APIs that your tests will cover
➤ Python API integration tutorial
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
Pine64's Linux-Powered E-Ink Tablet is Making a Return [Slashdot: Linux]
"Pine64 has confirmed that its open-source e-ink tablet is returning," reports the blog OMG Ubuntu: The [10.1-inch e-ink display] PineNote was announced in 2021, building on the success of its non-SBC devices like the PinePhone (and later Pro model), the PineTab, and PineBook devices. Like most of Pine64's devices, software support is largely tackled by the community. But only a small batch of developer units were ever sold, primarily by enthusiasts within the open-source community who had the knowledge and desire to work on getting a modern Linux OS to run on the hardware, and adapt to the e-ink display. That process has taken a while, as Pine64's community bloggers explain: "The PineNote was stuck in a chicken-and-egg situation because of the very high cost of manufacturing the device (ePaper screens are sadly still expensive), and so the risk of manufacturing units that then didn't have a working Linux OS and would not sell was huge." However, the proverbial egg has finally hatched. The PineNote now has a reliable Debian-based OS, developed by Maximilian Weigand. This is described as "not only a bare-bones capable OS but a genuinely daily-usable system that 'just works'" according to the Pine64 blog. ["This is excellent as it also moves the target audience from developers to every day users. You should be able to power on the device and drop into a working Gnome experience."] It is said to use the GNOME desktop plus a handful of extensions designed to ensure the UI adapts to working well with an e-ink display. Software pre-installed includes Xournal++ for note taking, Firefox for web browsing, and Foliate for reading ebooks, among others. [And it even runs Doom...] Existing PineNote owners can download the the new OS image, flash it to their device, and help test it... Touch and stylus input are major selling points of the PineNote, positioning it as a libre alternative to leading e-ink note-taking devices like the Remarkable 2, Onyx BOOX, and Amazon Scribe. "I do not (yet) have a launch date target," according to the blog post, "as behind-the-scenes the Pine Store team are still working on all things production." But the update also links to some blog posts about their free and open source smartwatch PineTime...
Ardour 8.8 (Open Source DAW) Drops Fresh Fixes & Features [OMG! Ubuntu!]
Ardour is one of the most popular and powerful open-source digital audio workstations (DAW) around, and a major new update was recently made available. Now, I can’t profess to be some kind of music-making maestro, though I did spend much of my late teens face-first in FL Studio (formerly Fruity Loops) trying – and failing – to channel my inner Cash Cash (’08 era, before their mainstream genre shift). Ardour 8.8 is the second update to the DAW in 2 weeks because, as the software’s devs explain, “v8.7 […] turned out to have a couple of major issues that required […]
NetworkManager 1.50 Released, Supports veth Config in Terminal UI [OMG! Ubuntu!]
A new version of NetworkManager – used by most Linux distributions (including Ubuntu) to manage wired and wireless network connections – was released this week. NetworkManager 1.50 won’t be included in Ubuntu 24.10 (that ships with v1.48) but I think some of the changes it makes may be worth knowing about all the same. Notably, NetworkManager 1.50 now formally deprecates support for dhclient in favour of its own internal DHCP client. The former will now no longer be built “…unless explicitely (sic) enabled, and will be removed in a future release.” Will this be a major issue? Unlikely; NetworkManager began […]
Mozilla’s New Logo Looks Even Better Animated [OMG! Ubuntu!]
A few months ago I reported that Mozilla is getting a brand revamp and that it incorporates the non-profit company’s iconic red dinosaur mascot – now I have a bit more info. A reader, Nicolas, recently pointed me to the website of global design agency Jones Knowles Ritchie, who Mozilla hired to update, refine, and revitalise its brand identity. As design agencies go, Jones Knowles Ritchie has considerable cultural cachet, having worked with major world-famous brands, ranging from Burger King to Budweiser – and now web browser maker Mozilla. Their website has a dedicated page to showcase their work on […]
Pine64’s Linux-Powered E-Ink Tablet is Making a Return [OMG! Ubuntu!]
Pine64 has confirmed that its open-source e-ink tablet is returning. The PineNote was announced in 2021, building on the success of its non-SBC devices like the PinePhone (and later Pro model), the PineTab, and PineBook devices. Like most of Pine64’s devices, software support is largely tackled by the community. But only a small batch of developer units were ever sold, primarily to enthusiasts within the open-source community who had the knowledge and desire to work on getting a modern Linux OS to run on the hardware, and adapt to the e-ink display. That process has taken a while, as Pine64’s […]
How to Install MongoDB on AlmaLinux 9 [Linux Today]
MongoDB is an open-source, cross-platform, and distributed NoSQL (Non-SQL or Non-Relational) database system. This guide will show you how to install MongoDB on an AlmaLinux 9 server.
Ardour 8.8 DAW Launches with Hot-Fixes and New Features [Linux Today]
Ardour 8.8 Digital Audio Workstation introduces new features, including track dragging, ruler changes, and powerful parallel disk I/O.
Julien Tayon: Bidirectional python/tk by talking to the tk interpreter back and forth [Planet Python]
Last time I exposed an old way learned in physics labs to do C or python/tk like in the old days: by summoning a tcl/tk interpreter and piping commands to it.
But what fun is it?
It's funnier if the tcl/tk interpreter talks back to python :D as an homage to the long-awaited Tk 9 (25 years in the making) that solves a lot of unicode trouble.
Beforehand, to make sense of the code a little warning is required: the first version of this code targeted only POSIX environments and lost portability, because I chose a way that is not the « one best way » for enabling bidirectional talks.
By using os.set_blocking(p.stdout.fileno(), False) we can have portable non-blocking IO instead, which means this trick has been tested on linux, freeBSD and windows successfully.
First and foremost, the Popen call now uses stdout=PIPE, enabling the channel on which tcl will talk. As a joke, puts/gets are named after the tcl/tk functions and are used in python to push/get strings from tcl.
Instead of using multithreading (having one thread listen to the output and putting the events in a local queue that the main thread consumes), I chose the funnier technique of setting the tcl/tk output non-blocking. The fcntl way of doing this does not work on windows and is kept commented out in the code; the portable os.set_blocking one-liner replaces it.
Then, I chose not to parse the output of tcl/tk but to exec it, making tcl/tk actually push python commands back to python. That's the exec part of the code.
For this I needed an excuse: so I added buttons to change minutes/hours back and forth.
That's the moment we are all gonna agree that tcl/tk's biggest sin is its default look. Don't worry, the next part is about using themes.
Compared to the first post, changes are minimal :D
This is how it should look:
#!/usr/bin/env python
from subprocess import Popen, PIPE
from time import sleep, time, localtime
# import fcntl
import os

# let's talk to tk/tcl directly through p.stdin
p = Popen(['wish'], stdin=PIPE, stdout=PIPE)

# best non portable answer on stackoverflow
#fd = p.stdout.fileno()
#flag = fcntl.fcntl(fd, fcntl.F_GETFL)
#fcntl.fcntl(fd, fcntl.F_SETFL, flag | os.O_NONBLOCK)
# ^-- this 3 lines can be replaced with this one liner --v
# portable non blocking IO
os.set_blocking(p.stdout.fileno(), False)

def puts(s):
    for l in s.split("\n"):
        p.stdin.write((l + "\n").encode())
    p.stdin.flush()

def gets():
    ret = p.stdout.read()
    p.stdout.flush()
    return ret

WIDTH = HEIGHT = 400
puts(f"""
canvas .c -width {WIDTH} -height {HEIGHT} -bg white
pack .c
. configure -background white
ttk::button .ba -command {{ puts ch-=1 }} -text <<
pack .ba -side left -anchor w
ttk::button .bb -command {{ puts cm-=1 }} -text <
pack .bb -side left -anchor w
ttk::button .bc -command {{ puts ch+=1 }} -text >>
pack .bc -side right -anchor e
ttk::button .bd -command {{ puts cm+=1 }} -text >
pack .bd -side right -anchor e
""")

# Constants are CAPitalized in python by convention
from cmath import pi as PI, e as E
ORIG = complex(WIDTH/2, HEIGHT/2)
# correcting python notations j => I
I = complex("j")

rad_per_sec = 2.0 * PI / 60.0
rad_per_min = rad_per_sec / 60
rad_per_hour = rad_per_min / 12

origin_vector_hand = WIDTH/2 * I

size_of_sec_hand = .9
size_of_min_hand = .8
size_of_hour_hand = .65

rot_sec = lambda sec: -E ** (I * sec * rad_per_sec)
rot_min = lambda min: -E ** (I * min * rad_per_min)
rot_hour = lambda hour: -E ** (I * hour * rad_per_hour)

to_real = lambda c1, c2: "%f %f %f %f" % (c1.real, c1.imag, c2.real, c2.imag)

for n in range(60):
    direction = origin_vector_hand * rot_sec(n)
    start = .9 if n % 5 else .85
    puts(f".c create line {to_real(ORIG + start * direction, ORIG + .95 * direction)}")
    sleep(.01)

diff_offset_in_sec = (time() % (24 * 3600)) - \
    localtime()[3] * 3600 - localtime()[4] * 60.0 \
    - localtime()[5]

ch = cm = 0
while True:
    # eventually parsing tcl output
    back = gets()
    # trying is more concise than checking
    try:
        back = back.decode()
        exec(back)
    except Exception as e:
        pass
    t = time()
    s = t % 60
    m = m_in_sec = t % (60 * 60) + cm * 60
    h = h_in_sec = (t - diff_offset_in_sec) % (24 * 60 * 60) + ch * 3600 + cm * 60
    puts(".c delete second")
    puts(".c delete minute")
    puts(".c delete hour")
    c0 = ORIG + -.1 * origin_vector_hand * rot_sec(s)
    c1 = ORIG + size_of_sec_hand * origin_vector_hand * rot_sec(s)
    puts(f".c create line {to_real(c0, c1)} -tag second -fill blue -smooth true")
    c1 = ORIG + size_of_min_hand * origin_vector_hand * rot_min(m)
    puts(f".c create line {to_real(ORIG, c1)} -tag minute -fill green -smooth true")
    c1 = ORIG + size_of_hour_hand * origin_vector_hand * rot_hour(h)
    puts(f".c create line {to_real(ORIG, c1)} -tag hour -fill red -smooth true")
    puts("flush stdout")
    sleep(.1)
Talk Python to Me: #479: Designing Effective Load Tests for Your Python App [Planet Python]
You're about to launch your new app or API, or even just a big refactor of your current project. Will it stand up and deliver when you put it into production or when that big promotion goes live? Or will it wither and collapse? How would you know? Well, you would test that, of course. We have Anthony Shaw back on the podcast to dive into a wide range of tools and techniques for performance and load testing of web apps.

Episode sponsors: Sentry Error Monitoring (code TALKPYTHON) · WorkOS · Talk Python Courses

Links from the show:
Anthony on Twitter: @anthonypjshaw
Anthony's PyCon AU Talk: youtube.com
locust load testing tool: locust.io
playwright: playwright.dev
mimesis: github.com
mimesis providers: mimesis.name
vscode pets: marketplace.visualstudio.com
vscode power-mode: marketplace.visualstudio.com
opentelemetry: opentelemetry.io
uptime-kuma: github.com
Talk Python uptime / status: talkpython.fm/status
when your serverless computing bill goes parabolic...: youtube.com
Watch this episode on YouTube: youtube.com
Episode transcripts: talkpython.fm

Stay in touch with us:
Subscribe to us on YouTube: talkpython.fm/youtube
Follow Talk Python on Mastodon: talkpython
Follow Michael on Mastodon: mkennedy
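For a flavour of the tooling discussed, here is a minimal locust sketch (locust is one of the tools linked above; the endpoint and timings are illustrative, not from the episode):

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks
    wait_time = between(1, 3)

    @task
    def load_front_page(self):
        # Repeatedly fetch the front page under load
        self.client.get("/")

You would run it with something like locust -f locustfile.py --host https://your-app.example and then pick a user count and spawn rate in locust's web UI.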
Mariatta: Python Core Sprint 2024: Day 5 [Planet Python]
I reviewed some issues that came to the CPython repo. There were a few interesting tickets related to the datetime module. These issues were discovered by Hypothesis, a property-based testing tool for Python.
I’ve been hearing a lot about Hypothesis, but never really used it in production or at work. I watched a talk about it at PyCon US many years ago, and I even had an ice cream selfie with Zac, who maintains Hypothesis. Anyway, I’ve just been interested in learning more about Hypothesis and how it could solve issues not caught by other testing methods. I think this is one of the perks of contributing to open source: getting exposed to things you don’t normally use at work, and it’s a great way to learn new things.
Seth Michael Larson: EuroPython 2024 talks about security [Planet Python]
EuroPython 2024, which took place back in July, published the talk recordings to YouTube earlier this week. I've been under the weather for most of this week, but I have had a chance to listen to a few of the security-related talks in between resting.
This talk was delivered by Python Software Foundation Executive Director Deb Nicholson and Board Member Cheuk Ting Ho. The Cyber Resilience Act (CRA) is coming, and it'll affect more software than just the software written in the EU. Deb and Cheuk describe the recent developments in the CRA, like the creation of a new entity called the "Open Source Steward", and how open source foundations and maintainers are preparing for the CRA.
For the rest of this year and next year I am focusing on getting the Python ecosystem ready for software security regulations like the CRA and SSDF from the United States.
I'm starting with improving the Software Bill-of-Materials (SBOM) story for Python, because this is required by both current (and, likely, future) regulations. Knowing what software you are running is an important first step towards being able to secure that same software.
To collaborate with other open source foundations and projects on this work, I've joined the Open Regulatory Compliance Working Group hosted by the Eclipse Foundation.
This talk was given by Karolina Surma and it detailed all the work that goes into researching, writing, and having a Python packaging standard accepted (spoiler: it's a lot!). Karolina is working on PEP 639 which is for adopting the SPDX licensing expression and identifier standards in Python as they are the current state of the art for modeling complex licensing situations accurately for machine (and human) consumption.
This work is very important for Software Bill-of-Materials, as they require accurate license information in this exact format. Thanks to Karolina, C.A.M. Gerlach, and many others for working for years on this PEP; it will be useful to so many users once adopted!
This talk was given by Kairo de Araujo and Lukas Pühringer and it detailed the history and current status of The Update Framework (TUF) integration into the Python Package Index.
TUF provides better integrity guarantees for software repositories like PyPI, such as making it more difficult to "compel" the index to serve incorrect artifacts, and making a compromise of PyPI easier to roll back while being certain that files haven't been modified. For a full history and latest status, you can view PEP 458 and the top-level GitHub issue for Warehouse.
I was around for the original key-signing ceremony for the PyPI TUF root keys which was live-streamed back in October 2020. Time flies, huh.
This talk was given by Jakub Beránek about using type hints for more robust Python code. Having written a case-study on urllib3's adoption of type hints to find defects that testing and other tooling missed, I highly recommend type hints for Python code as well.
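As a minimal illustration of the kind of defect a type checker can flag before any test runs (a hypothetical example, not taken from the talk or from urllib3):

from typing import Optional

def get_header(headers: dict[str, str], name: str) -> Optional[str]:
    # .get() returns None when the header is missing
    return headers.get(name)

length = get_header({}, "content-length")
print(length.strip())  # a type checker flags this: "length" may be None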
This talk was given by Roshan R Chandar about using PyO3 and Rust in Python modules.
This talk was given by Facundo Tuesca on using Trusted Publishing for authenticating with PyPI to publish packages.
This talk was given by Jose Haro Peralta on how to design and implement secure web APIs using Python, data validation with Pydantic, and testing your APIs using tooling for detecting common security defects.
This talk was given by Cira Carey which highlights many of today's threats targeting open source consumers. Users should be aware of these when selecting projects to download and install.
How to Install Odoo 18 on Ubuntu 24.04 [Linux Today]
Odoo 18 is an open-source suite of business applications that provides a complete ERP (Enterprise Resource Planning) solution for organizations of various sizes. It offers a wide range of integrated tools and modules to help manage all aspects of a business, such as finance, sales, inventory, human resources, and more.
The open-source community edition is free, making it accessible to small businesses and developers. The enterprise edition, on the other hand, offers additional features, services, and support.
Odoo is highly customizable. Businesses can tailor modules to meet their specific needs, create custom workflows, or build entirely new apps using Odoo’s development framework.
In summary, Odoo is a versatile business management software that can streamline operations and provide real-time insights, making it an ideal solution for companies looking to optimize their business processes.
In this tutorial, we will show you how to install Odoo 18 on Ubuntu 24.04.
Banana Pi and OpenWrt’s One/AP-24.XY Router Board Hits the Market [Linux Today]
The first official OpenWrt One/AP-24.XY router board goes on sale, featuring MediaTek’s latest SoC with WiFi 6 for enhanced connectivity, priced at $111.
Real Python: Quiz: Iterators and Iterables in Python: Run Efficient Iterations [Planet Python]
In this quiz, you’ll test your understanding of Python’s Iterators and Iterables. By working through this quiz, you’ll revisit how to create and work with iterators and iterables, understand the differences between them, and review how to use generator functions and the yield statement.
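As a quick refresher before you take it, a generator function is one way to build an iterator with the yield statement; a minimal sketch:

def countdown(n):
    # Each yield suspends the function and hands one value to the caller
    while n > 0:
        yield n
        n -= 1

for value in countdown(3):
    print(value)  # prints 3, then 2, then 1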
Thousands of Linux Systems Infected By Stealthy Malware Since 2021 [Slashdot: Linux]
A sophisticated malware strain has infected thousands of Linux systems since 2021, exploiting over 20,000 common misconfigurations and a critical Apache RocketMQ vulnerability, researchers at Aqua Security reported. Dubbed Perfctl, the malware employs advanced stealth techniques, including rootkit installation and process name mimicry, to evade detection. It persists through system reboots by modifying login scripts and copying itself to multiple disk locations. Perfctl hijacks systems for cryptocurrency mining and proxy services, while also serving as a backdoor for additional malware. Despite some antivirus detection, the malware's ability to restart after removal has frustrated system administrators.
How to Install Pydio Cells on AlmaLinux 9 [Linux Today]
Pydio Cells is an open-source document-sharing and collaboration platform for your organization. In this guide, we’ll show you how to install Pydio Cells on an AlmaLinux 9 server.
Fwupd 2.0 Open-Source Linux Firmware Updater Released with Major Changes [Linux Today]
This new major release breaks the libfwupd ABI to drop legacy signing formats for verification of metadata and firmware, reduce the runtime memory usage and CPU startup cost significantly, remove all the long-deprecated legacy CLI tools, remove libgusb and GUdev from plugins and use libusb and sysfs instead, and stream firmware binaries over a file descriptor rather than into memory.
PyGObject: A Guide to Creating Python GUI Applications on Linux [Linux Today]
Creating graphical user interface (GUI) applications is a fantastic way to bring your ideas to life and make your programs more user-friendly.
PyGObject is a Python library that allows developers to create GUI applications on Linux desktops using the GTK (GIMP Toolkit) framework. GTK is widely used in Linux environments, powering many popular desktop applications like Gedit, GNOME terminal, and more.
In this article, we will explore how to create GUI applications under a Linux desktop environment using PyGObject. We’ll start by understanding what PyGObject is, how to install it, and then proceed to building a simple GUI application.
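To give a flavour of where that ends up, here is a minimal PyGObject window (a sketch assuming PyGObject and GTK 3 are installed; the article itself may build something different):

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# A bare-bones window containing a single label
win = Gtk.Window(title="Hello from PyGObject")
win.connect("destroy", Gtk.main_quit)  # stop the main loop when closed
win.add(Gtk.Label(label="Hello, Linux!"))
win.show_all()
Gtk.main()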
Fwupd 2.0: Major Changes and New Hardware Support [Linux Today]
Fwupd 2.0 launches with major enhancements: drops old signing formats, adds Darwin support, and revamps device firmware management.
Julien Tayon: Simpler than PySimpleGUI and python tkinter: talking directly to tcl/tk [Planet Python]
Well, the PySimpleGUI rug pull of its licence reminded me how much dependencies are not a good thing.
Even though FreeSimpleGUI is a good approach to a simpler tk/tcl binding in python, we can do better, especially if your linux distro splits the python package and you don't have access to tkinter. I am watching you debian, splitting ALL packages and breaking them, including ... tcl from tk (what a crime).
Under debian this stunt requires you to install tk:
apt install tk8.6
#!/usr/bin/env python
from subprocess import Popen, PIPE
from time import sleep, time, localtime

# let's talk to tk/tcl directly through p.stdin
p = Popen(['wish'], stdin=PIPE)

def puts(s):
    for l in s.split("\n"):
        p.stdin.write((l + "\n").encode())
    p.stdin.flush()

WIDTH = HEIGHT = 400
puts(f"""
canvas .c -width {WIDTH} -height {HEIGHT} -bg white
pack .c
. configure -background "white"
""")

# Constants are CAPitalized in python by convention
from cmath import pi as PI, e as E
ORIG = complex(WIDTH/2, HEIGHT/2)
# correcting python notations j => I
I = complex("j")

rad_per_sec = 2.0 * PI / 60.0
rad_per_min = rad_per_sec / 60
rad_per_hour = rad_per_min / 12

origin_vector_hand = WIDTH/2 * I

size_of_sec_hand = .9
size_of_min_hand = .8
size_of_hour_hand = .65

rot_sec = lambda sec: -E ** (I * sec * rad_per_sec)
rot_min = lambda min: -E ** (I * min * rad_per_min)
rot_hour = lambda hour: -E ** (I * hour * rad_per_hour)

to_real = lambda c1, c2: "%f %f %f %f" % (c1.real, c1.imag, c2.real, c2.imag)

for n in range(60):
    direction = origin_vector_hand * rot_sec(n)
    start = .9 if n % 5 else .85
    puts(f".c create line {to_real(ORIG + start * direction, ORIG + .95 * direction)}")
    sleep(.1)

diff_offset_in_sec = (time() % (24 * 3600)) - \
    localtime()[3] * 3600 - localtime()[4] * 60.0 \
    - localtime()[5]

while True:
    t = time()
    s = t % 60
    m = m_in_sec = t % (60 * 60)
    h = h_in_sec = (t - diff_offset_in_sec) % (24 * 60 * 60)
    puts(".c delete second")
    puts(".c delete minute")
    puts(".c delete hour")
    c0 = ORIG + -.1 * origin_vector_hand * rot_sec(s)
    c1 = ORIG + size_of_sec_hand * origin_vector_hand * rot_sec(s)
    puts(f".c create line {to_real(c0, c1)} -tag second -fill blue -smooth true")
    c1 = ORIG + size_of_min_hand * origin_vector_hand * rot_min(m)
    puts(f".c create line {to_real(ORIG, c1)} -tag minute -fill green -smooth true")
    c1 = ORIG + size_of_hour_hand * origin_vector_hand * rot_hour(h)
    puts(f".c create line {to_real(ORIG, c1)} -tag hour -fill red -smooth true")
    sleep(.1)

Next time as a bonus, I'm gonna do something tkinter cannot do: bidirectional communications (REP/REQ pattern).
Julien Tayon: PySimpleGUI : surviving the rug pull of licence part I [Planet Python]
I liked pySimpleGUI because, as a coder that likes tkinter (the Tk/Tcl bindings) and as a former tcl/tk coder, I enjoyed the syntactic sugar that avoided all the boilerplate required to build the application.
The main advantage was not having to remember in which order to do the pack calls, and not having to do the mainloop call myself. It was not a revolution, just a simple, elegant evolution, hence I was still feeling in control.
However, the project made a jerk move by relicensing under a fully proprietary license that requires a key to work functionally.
I will not discuss this since the point has been made clearly on the python mailing list.
Luckily, I want to raise 2 points:
pip install git+https://github.com/jul/PySimpleGUI#egg=pysimpleGUI
pip install FreeSimpleGUI

and then in the source:
- import PySimpleGUI as sg
+ import FreeSimpleGUI as sg
Python Engineering at Microsoft: Python in Visual Studio Code – October 2024 Release [Planet Python]
We’re excited to announce the October 2024 release of the Python and Jupyter extensions for Visual Studio Code!
This release includes the following announcements:
If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.
You can now run Python tests with coverage in VS Code! Test coverage is a measure of how much of your code is covered by your tests, which can help you identify areas of your code that are not being fully tested.
To run tests with coverage enabled, select the coverage run icon in the Test Explorer or the “Run with coverage” option from any menu you normally trigger test runs from. The Python extension will run coverage using the pytest-cov plugin if you are using pytest, or with coverage.py for unittest.
Note: Before running tests with coverage, make sure to install the correct testing coverage package for your project.
Once the coverage run is complete, lines will be highlighted in the editor for line level coverage. Test coverage results will appear as a “Test Coverage” sub-tab in the Test Explorer, which you can also navigate to with Testing: Focus on Test Coverage View in the Command Palette (F1). On this panel you can view line coverage metrics for each file and folder in your workspace.
For more information on running Python tests with coverage, see our Python test coverage documentation. For general information on test coverage, see VS Code’s Test Coverage documentation.
We are excited to announce support for one of our longest requested features: there is now a default Python problem matcher! Aiming to simplify issue tracking in your Python code and offering more contextual feedback, a problem matcher scans the task’s output for errors and warnings and displays them in the Problems panel, enhancing your development workflow. To integrate it, add "problemMatcher": "$python" to your tasks in task.json.
Below is an example of a task.json file that uses the default problem matcher for Python:
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Run Python",
            "type": "shell",
            "command": "${command:python.interpreterPath}",
            "args": [
                "${file}"
            ],
            "problemMatcher": "$python"
        }
    ]
}
For more information on tasks and problem matchers, visit VS Code’s Tasks documentation.
There’s a new setting python.analysis.languageServerMode that enables you to choose between our current IntelliSense experience or a lightweight one that is optimized for performance. If you don’t require the full breadth of IntelliSense capabilities and prefer Pylance to be as resource-friendly as possible, you can set python.analysis.languageServerMode to light. Otherwise, to continue with the experience you have with Pylance today, you can leave out the setting entirely or explicitly set it to default.
This new functionality overrides the default values of the following settings:
Setting | light mode | default mode
---|---|---
“python.analysis.exclude” | [“**”] | []
“python.analysis.useLibraryCodeForTypes” | false | true
“python.analysis.enablePytestSupport” | false | true
“python.analysis.indexing” | false | true
The settings above can still be changed individually to override the default values.
The Python extension now includes a python.terminal.shellIntegration.enabled setting to enable a better terminal experience on macOS and Linux machines. When enabled, this setting runs a PYTHONSTARTUP script before you launch the Python REPL in the terminal (for example, by typing and entering python), allowing you to leverage terminal shell integrations such as command decorations, re-run command and run recent commands.
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:
"python.analysis.aiCodeActions": {"implementAbstractClasses": true}
in your User settings.jsonexecuteCommand
rather than sendText
for the activation command in @vscode#23929We would also like to extend special thanks to this month’s contributors:
- uv.lock added to file associations in vscode-python#23991
- @typescript-eslint/no-explicit-any suppression in vscode-python#24091

Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – October 2024 Release appeared first on Python.
Real Python: Quiz: Python import: Advanced Techniques and Tips [Planet Python]
In this quiz, you’ll test your understanding of Python’s import statement and related topics.
By working through this quiz, you’ll revisit how to use modules in your scripts and import modules dynamically at runtime.
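As a quick warm-up for the dynamic-import part, here is a minimal sketch using importlib from the standard library:

import importlib

# Import a module whose name is only known at runtime
module_name = "math"
math_module = importlib.import_module(module_name)
print(math_module.sqrt(16))  # 4.0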
Chris Rose: uv, direnv, and simple .envrc files [Planet Python]
I have adopted uv for a lot of Python development. I'm also a heavy user of direnv, which I like as a tool for setting up project-specific environments. Much like Hynek describes, I've found uv sync to be fast enough to put into the chdir path for new directories. Here's how I'm doing it.
First, it turns out you can pretty easily define custom direnv functions like the built-in ones (layout python, etc...). You do this by adding functions to ~/.config/direnv/direnvrc or in ~/.config/direnv/lib/ as shell scripts. I use this extensively to make my .envrc files easier to maintain and smaller. Now that I'm using uv, here is my default for python:
function use_standard-python() {
    source_up_if_exists
    dotenv_if_exists
    source_env_if_exists .envrc.local
    use venv
    uv sync
}
Let me explain each of these commands and why they are there:
source_up_if_exists -- this direnv stdlib function is here because I often group my projects into directories with common configuration. For example, when working on Chicon 8, I had a top level .envrc that set up the AWS configuration to support deploying Wellington and the Chicon 8 website. This searches up til it finds a .envrc in a higher directory, and uses that. source_up is the noisier, less-adaptable sibling.
dotenv_if_exists -- this loads .env from the current working directory. 12-factor apps often have environment-driven configuration, and docker compose uses them relatively seamlessly as well. Doing this makes it easier to run commands from my shell that behave like my development environment.
source_env_if_exists .envrc.local -- sometimes you need more complex functionality in a project than just environment variables. Having this here lets me use .envrc.local for that. This comes after .env because sometimes you want to change those values.
use venv -- this is a function that activates the project .venv (creating it if needed); I'm old and set in my ways, and I prefer . .venv/bin/activate.fish in my shell to the more newfangled "prefix it with a runner" mode.
uv sync -- this is a super fast, "install my development and main dependencies" command. This was way, way too slow with pip, pip-tools, poetry, pdm, or hatch, but with uv, I don't mind having this in my .envrc.

With this set up in direnv's configuration, all I need in my .envrc file is this:
use standard-python
I've been using this pattern for a while now; it lets me upgrade how I do default Python setups, with project specific settings, easily.
Parabolic (Video Downloader) Rewritten in C++, Adjusts UI [OMG! Ubuntu!]
There are plenty of ways to download videos from well-known video streaming sites on Ubuntu but I find Parabolic the easiest, least hassle option out there. For those yet to hear about it, Parabolic is a GTK4/libadwaita app for Linux (or a Qt one for Windows) that offers what it describes as a ‘basic frontend’ to yt-dlp. All sites supported by yt-dlp are supported in this app. Paste in a URL, validate, and download. Parabolic lets you download multiple videos simultaneously and save them to popular video or audio formats; sign-in with account details (if needed) and see the credentials to […]
Audacious 4.4.1 Released with Assorted Minor Improvements [OMG! Ubuntu!]
A chorus of improvements are on offer in the newest update to the popular open source, cross-platform Audacious music player. Audacious 4.4.1 builds on the changes introduced in Audacious 4.4 (a release that brought GTK3 and Qt6 UI choices, the return of a dedicated lyrics plugin, and better compatibility with PipeWire) rather than adding any huge new features of its own. But that’s no bad thing; finesse, fix ’em ups, and extended support for existing features are as welcome as gaudy new GUI elements to me. Notable changes include: The change-log also says the PulseAudio plugin is now preferred over […]
Trey Hunner: Switching from virtualenvwrapper to direnv, Starship, and uv [Planet Python]
Earlier this week I considered whether I should finally switch away from virtualenvwrapper to using local .venv managed by direnv.
I’ve never seriously used direnv, but I’ve been hearing Jeff and Hynek talk about their use of direnv for a while.
After a few days, I’ve finally stumbled into a setup that works great for me. I’d like to note the basics of this setup as well as some fancy additions that are specific to my own use case.
First, I’d like to note my old workflow that I’m trying to roughly recreate:
- mkvenv3 <project_name> to create a new virtual environment for the current project directory and activate it
- workon <project_name> when I want to work on that project: this activates the correct virtual environment and changes to the project directory

The initial setup I thought of allows me to:

- run echo layout python > .envrc && direnv allow to create a virtual environment for the current project and activate it

The more complex setup I eventually settled on allows me to:

- run venv <project_name> to create a virtual environment for the current project and activate it
- run workon <project_name> to change directories into the project (which automatically activates the virtual environment)

First, I installed direnv and added the standard hook to my ~/.zshrc file:

eval "$(direnv hook zsh)"
Then whenever I wanted to create a virtual environment for a new project I created a .envrc file in that directory, which looked like this:

layout python
Then I ran direnv allow, as direnv instructed me to, allowing the new virtual environment to be automatically created and activated.
That’s pretty much it.
Unfortunately, I did not like this initial setup.
The first problem was that the virtual environment’s prompt didn’t show up in my shell prompt. This is due to direnv not allowing modification of the PS1 shell prompt. That means I’d need to modify my shell configuration to show the correct virtual environment name myself.

So I added a snippet to my ~/.zshrc file to show the virtual environment name at the beginning of my prompt:

[prompt snippet omitted from this excerpt]
The next problem was that the virtual environment was placed in .direnv/python3.12. I wanted each virtual environment to be in a .venv directory instead. To do that, I made a .config/direnv/direnvrc file that customized the python layout:

[direnvrc snippet omitted from this excerpt]
I also didn’t like the loading and unloading messages that showed up each time I changed directories. I removed those by clearing the DIRENV_LOG_FORMAT variable in my ~/.zshrc configuration:

export DIRENV_LOG_FORMAT=
I don’t like it when all my virtual environment prompts show up as .venv. I want every prompt to be the name of the actual project… which is usually the directory name. I also really wanted to be able to type venv to create a new virtual environment, activate it, and create the .envrc file for me automatically. Additionally, I thought it would be really handy if I could type workon <project_name> to change directories to a specific project.

I made two aliases in my ~/.zshrc configuration for all of this:

[the two shell aliases are omitted from this excerpt]
Now I can type this to create a .venv virtual environment in my current directory, which has a prompt named after the current directory, activate it, and create a .envrc file which will automatically activate that virtual environment (thanks to that ~/.config/direnv/direnvrc file) whenever I change into that directory:

venv
If I wanted to customize the prompt name for the virtual environment, I could do this:

venv <project_name>
When I want to start working on that project later, I can either change into that directory or, if I'm feeling lazy, I can simply type:

workon <project_name>
That reads from my ~/.projects file to look up the project directory to switch to.
I also decided to try using uv for all of this, since it’s faster at creating virtual environments.
One benefit of uv is that it tries to select the correct Python version for the project, if it sees a version noted in a pyproject.toml file. Another benefit of using uv is that I should also be able to update the venv to use a specific version of Python with something like --python 3.12.

Here are the updated shell aliases for the ~/.zshrc for uv:

[updated aliases omitted from this excerpt]
I also decided to try out using Starship to customize my shell this week. I added the standard Starship hook to my ~/.zshrc:

eval "$(starship init zsh)"
And I removed the prompt-customization snippet from earlier, which is no longer needed since Starship will be managing the shell prompt for me.
I also switched my python layout for direnv to just set the $VIRTUAL_ENV variable and add the $VIRTUAL_ENV/bin directory to my PATH, since the $VIRTUAL_ENV_PROMPT variable isn't needed for Starship to pick up the prompt:

[layout snippet omitted from this excerpt]
I also made a very boring Starship configuration in ~/.config/starship.toml:

[starship.toml omitted from this excerpt]
I setup such a boring configuration because when I’m teaching, I don’t want my students to be confused or distracted by a prompt that has considerably more information in it than their default prompt may have.
The biggest downside of switching to Starship has been my own earworm-oriented brain. As I update my Starship configuration files, I’ve repeatedly heard David Bowie singing “I’m a Starmaaan”. 🎶
After all of that, I realized that I could additionally use different Starship configurations for different directories by putting a STARSHIP_CONFIG variable in specific layouts. After that realization, I made my configuration even more vanilla and made some alternative configurations in my ~/.config/direnv/direnvrc file:

[direnvrc additions omitted from this excerpt]
Those other two configuration files are fancier, as I have no concern about them distracting my students since I’ll never be within those directories while teaching.
You can find those files in my dotfiles repository.
So I replaced virtualenvwrapper with direnv, uv, and Starship, though direnv is doing most of the important work here. The use of uv and Starship were just bonuses.
I am also hoping to eventually replace my pipx use with uv, and once uv supports adding python3.x commands to my PATH, I may replace my use of pyenv with uv as well.
Thanks to all who participated in my Mastodon thread as I fumbled through discovering this setup.
Tryton News: Security Release for issue #93 [Planet Python]
Cédric Krier has found that python-sql does not escape non-Expression values for unary operators (like And and Or), which makes any system exposing those vulnerable to an SQL injection attack.
There is no known workaround.
All affected users should upgrade python-sql to the latest version.
Affected versions: <= 1.5.1
Non affected versions: >= 1.5.2
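For context, python-sql builds queries as parameterized SQL, which is why a missed escape is serious. A minimal usage sketch (illustrative only; it does not demonstrate the vulnerable code path):

from sql import Table

user = Table('user')
query = user.select(user.id, where=(user.name == 'alice'))
# Rendering a query yields the SQL string plus its parameter tuple
print(tuple(query))  # (SQL with %s placeholders, ('alice',))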
Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/python-sql with the confidential checkbox checked.
How to Set Up a Debian Development Environment [Linux Journal - The Original Magazine of the Linux Community]
Setting up a development environment is a crucial step for any programmer or software developer. Whether you’re building web applications, developing software, or diving into system programming, having a well-configured environment can make all the difference in your productivity and the quality of your work. This article aims to guide you through the process of setting up a Debian development environment, leveraging the stability and versatility that Debian offers.
Debian is renowned for its stability, security, and vast software repositories, making it a favored choice for developers. This guide will walk you through the steps of setting up a Debian development environment, covering everything from installation to configuring essential tools and programming languages. By the end, you’ll have a robust setup ready for your next project.
Before you begin, ensure that your hardware meets the following minimum specifications:
Debian Installation Media: You'll need the ISO file of the Debian distribution, which you can download from the official Debian website.
Basic Understanding of the Linux Command Line: Familiarity with command-line operations will be beneficial, as many steps will involve terminal commands.
Navigate to the Debian download page and choose the version that suits your needs. The Stable version is recommended for most users due to its reliability.
Creating a Bootable USB

To install Debian, you will need to create a bootable USB drive. Here are some tools you can use:
To create the USB, follow these steps using balenaEtcher as an example:
Booting from USB: Restart your computer and boot from the USB drive. This typically involves pressing a key like F2, F12, or Del during startup to access the boot menu.
Exploring Network Dynamics with NetworkX on Linux [Linux Journal - The Original Magazine of the Linux Community]
In the age of data, understanding complex relationships within networks—ranging from social interactions to infrastructure systems—is more crucial than ever. Network analysis provides a set of techniques and tools for exploring these relationships, offering insights into the structure and dynamics of various systems. Among the myriad tools available, NetworkX emerges as a powerful Python library designed to handle these intricate analyses with ease, especially when run on robust platforms like Linux. This article explores how to effectively use NetworkX for network analysis on a Linux environment, providing both foundational knowledge and practical applications.
Before diving into the world of network analysis, it’s essential to set up a conducive environment on a Linux system. Here’s a step-by-step guide to getting started:
Installing Linux: If you don’t have Linux installed, Ubuntu is a recommended distribution for beginners due to its user-friendly interface and extensive community support. You can download it from the official Ubuntu website and follow the installation guide to set it up on your machine.
Setting up Python and Pip: Most Linux distributions come with Python pre-installed. You can verify this by running python3 --version in your terminal. If it’s not installed, you can install Python using your distribution’s package manager (e.g., sudo apt install python3). Next, install pip, Python’s package manager, by running sudo apt install python3-pip.
Installing NetworkX: With Python and pip ready, install NetworkX by running pip3 install networkx. Optionally, install Matplotlib for visualizing networks (pip3 install matplotlib).
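With the environment ready, here is a minimal sketch of NetworkX in action (the names and edges are illustrative):

import networkx as nx

# Build a small undirected friendship graph
G = nx.Graph()
G.add_edges_from([
    ("Alice", "Bob"),
    ("Bob", "Carol"),
    ("Carol", "Alice"),
    ("Carol", "Dave"),
])

print(G.number_of_nodes(), G.number_of_edges())  # 4 4
print(nx.degree_centrality(G))                   # Carol scores highest
print(list(nx.connected_components(G)))          # a single component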
Network analysis operates on networks, which are structures consisting of nodes (or vertices) connected by edges (or links). Here’s a breakdown of key concepts:
Mozilla Firefox 131 Brings Tab Hover Previews, URL Fragments + More [OMG! Ubuntu!]
Mozilla Firefox 131 is now available to download with a small set of improvements in tow. The first change I noticed when opening Firefox 131 is the new icon for the ‘all tabs’ feature. Previously a small downward pointing arrow, this new, more obvious icon is a small squarish depiction of a tabbed web browser. The change was made ahead of vertical tabs (an upcoming feature) that moves this button to the toolbar if vertical tabs are enabled. Mozilla say “hovering the mouse over an unfocused tab will now display a visual preview of its contents”. These visual tab hover previews were […]
Raspberry Pi’s New $70 AI Camera Works With All Pi Models [OMG! Ubuntu!]
If you’re looking to kick the tyres on AI image processing/recognition projects and own an older Raspberry Pi model, the company’s new ‘AI Camera’ add-on will be of interest. Where the $70 Raspberry Pi AI Kit announced in June only works with a Raspberry Pi 5, the new $70 AI camera works with all Raspberry Pi boards that have the relevant camera connector port (spoiler: most, including the Raspberry Pi Zero and Raspberry Pi 400). This new AI Camera is the latest fruit from Raspberry Pi’s ongoing partnership with Sony Semiconductor Solutions, making use of the latter outfit’s IMX500 image […]
Real Python: A Guide to Modern Python String Formatting Tools [Planet Python]
When working with strings in Python, you may need to interpolate values into your string and format these values to create new strings dynamically. In modern Python, you have f-strings and the .format() method to approach the tasks of interpolating and formatting strings.
In this tutorial, you’ll learn how to:

- Use f-strings and the .format() method for string interpolation
Get Your Code: Click here to download the free sample code that shows you how to use modern string formatting tools in Python.
Take the Quiz: Test your knowledge with our interactive “A Guide to Modern Python String Formatting Tools” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz: A Guide to Modern Python String Formatting Tools

You can take this quiz to test your understanding of modern tools for string formatting in Python. These tools include f-strings and the .format() method.
Python has developed different string interpolation and formatting tools over the years. If you’re getting started with Python and looking for a quick way to format your strings, then you should use Python’s f-strings.
Note: To learn more about string interpolation, check out the String Interpolation in Python: Exploring Available Tools tutorial.
If you need to work with older versions of Python or legacy code, then it’s a good idea to learn about the other formatting tools, such as the .format() method.
In this tutorial, you’ll learn how to format your strings using f-strings and the .format() method. You’ll start with f-strings to kick things off, which are quite popular in modern Python code.
Python has a string formatting tool called f-strings, which stands for formatted string literals. F-strings are string literals that you can create by prepending an f or F to the literal. They allow you to do string interpolation and formatting by inserting variables or expressions directly into the literal.
Here you’ll take a look at how you can create an f-string by prepending the string literal with an f or F:
>>> f"Hello, Pythonista!"
'Hello, Pythonista!'
>>> F"Hello, Pythonista!"
'Hello, Pythonista!'
Using either f or F has the same effect. However, it’s a more common practice to use a lowercase f to create f-strings.
Just like with regular string literals, you can use single, double, or triple quotes to define an f-string:
>>> f'Single-line f-string with single quotes'
'Single-line f-string with single quotes'
>>> f"Single-line f-string with double quotes"
'Single-line f-string with double quotes'
>>> f'''Multiline triple-quoted f-string
... with single quotes'''
'Multiline triple-quoted f-string\nwith single quotes'
>>> f"""Multiline triple-quoted f-string
... with double quotes"""
'Multiline triple-quoted f-string\nwith double quotes'
Up to this point, your f-strings look pretty much the same as regular strings. However, if you create f-strings like those in the examples above, you’ll get complaints from your code linter if you have one.
The remarkable feature of f-strings is that you can embed Python variables or expressions directly inside them. To insert the variable or expression, you must use a replacement field, which you create using a pair of curly braces.
The variable that you insert in a replacement field is evaluated and converted to its string representation. The result is interpolated into the original string at the replacement field’s location:
>>> site = "Real Python"
>>> f"Welcome to {site}!"
'Welcome to Real Python!'
In this example, you’ve interpolated the site variable into your string. Note that Python treats anything outside the curly braces as a regular string.
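Replacement fields aren’t limited to bare variable names: they can hold expressions, and a format specifier after a colon controls how the value is rendered. A small illustrative session (standard f-string behaviour, not an excerpt from the tutorial):

>>> price = 1234.5678
>>> f"Total: {price:,.2f}"
'Total: 1,234.57'
>>> f"{2 + 3 = }"
'2 + 3 = 5'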
Kushal Das: Thank you Gnome Nautilus scripts [Planet Python]
As I upload photos to various services, I generally resize them as required, based on portrait or landscape mode. I used to do that for all the photos in a directory and then pick which ones to use. But I wanted to do it selectively: open the photos in the GNOME Nautilus (Files) application, right-click, and resize only the ones I want.
This week I noticed that I can do that with scripts. They can be written in any language; the selected files are passed as command-line arguments, and their full paths are also available in the NAUTILUS_SCRIPT_SELECTED_FILE_PATHS environment variable, joined by newline characters.
To add a script to the right-click menu, you just need to place it in the ~/.local/share/nautilus/scripts/ directory. It will then show up in the Scripts entry of the right-click menu.
Below is the script I am using to reduce image sizes:
#!/usr/bin/env python3
import os
import sys
import subprocess

from PIL import Image

# paths = os.environ.get("NAUTILUS_SCRIPT_SELECTED_FILE_PATHS", "").split("\n")
paths = sys.argv[1:]

for fpath in paths:
    if fpath.endswith(".jpg") or fpath.endswith(".jpeg"):
        # Assume that is a photo
        try:
            img = Image.open(fpath)
            # basename = os.path.basename(fpath)
            basename = fpath
            name, extension = os.path.splitext(basename)
            new_name = f"{name}_ac{extension}"
            w, h = img.size
            # If w > h then it is a landscape photo
            if w > h:
                subprocess.check_call(["/usr/bin/magick", basename, "-resize", "1024x686", new_name])
            else:  # It is a portrait photo
                subprocess.check_call(["/usr/bin/magick", basename, "-resize", "686x1024", new_name])
        except Exception:
            # Don't care, continue with the next file
            pass
You can see it in action (I selected the photos and right-clicked, but the recording missed that part).
Real Python: Quiz: A Guide to Modern Python String Formatting Tools [Planet Python]
Test your understanding of Python’s tools for string formatting, including f-strings and the .format() method.
Take this quiz after reading our A Guide to Modern Python String Formatting Tools tutorial.
Python Software Foundation: Python 3.13 and the Latest Trends: A Developer's Guide to 2025 - Live Stream Event [Planet Python]
Join Tania Allard, PSF Board Member, and Łukasz Langa, CPython Developer-in-Residence, for ‘Python 3.13 and the Latest Trends: A Developer’s Guide to 2025’, a live stream event hosted by Paul Everitt from JetBrains. Thanks to JetBrains for partnering with us on the Python Developers Survey and this event to highlight the current state of Python!
The session will take place tomorrow, October 3, at 5:00 pm CEST (11:00 am EDT). Tania and Łukasz will be discussing the exciting new features in Python 3.13, plans for Python 3.15 and current Python trends gathered from the 2023 Annual Developers Survey. Don't miss this chance to hear directly from the experts behind Python’s development!
Don’t forget to enable YouTube notifications for the stream and mark your calendar.
PyCharm: Prompt AI Directly in the Editor [Planet Python]
With PyCharm, you now have the support of AI Assistant at your fingertips. You can interact with it right where you do most of your work – in the editor.
Stuck with an error in your code? Need to add documentation or tests? Just start typing your request on a new line in the editor, just as if you were typing in the AI Assistant chat window. PyCharm will automatically recognize your natural language request and generate a response.
PyCharm leaves a purple mark in the gutter next to lines changed by AI Assistant so you can easily see what has been updated.
If you don’t like the initial suggestion, you can generate a new one by pressing Tab. You can also adjust the initial input by clicking on the purple block in the gutter or simply pressing Ctrl+/ or ⌘/.
Want to get assistance with a specific argument? You can narrow the context that AI Assistant uses for its response as much as you want. Just put the caret in the relevant context, type the $ or ? symbol, and start writing. PyCharm will recognize your prompt and take the current context into account for its suggestions.
The new inline AI assistance works for Python, JavaScript, TypeScript, JSON, and YAML file formats, while the option to narrow the context works only for Python so far.
This feature is available to all AI Assistant subscribers in the second PyCharm 2024.3 EAP build. You can get a free trial version of AI Assistant straight in the IDE: to enable AI Assistant, open a project in PyCharm, click the AI icon on the right-hand toolbar, and follow the instructions that appear.
William Minchin: u202410012332 [Planet Python]
Microblogging v1.3.0 for Pelican released! Posts should now sort as expected. Thanks @ashwinvis. on PyPI #Microblogging #Pelican Plugins #Releases #Python
Linux Mint Gives First Look at New Cinnamon Theme [OMG! Ubuntu!]
As revealed last month, Linux Mint is working on an improved default theme for the Cinnamon desktop – and today we got our first look at what’s coming. The way Cinnamon looks in Linux Mint (the distribution) is not the way it looks if you install the Cinnamon desktop yourself on a different distro. There, assuming a theme pack isn’t pulled in as a dependency, you’ll see the default built-in Cinnamon theme. And it’s that built-in theme that Linux Mint is currently improving. Mint says “the new default theme [is] much darker and contrasted than before. Objects are rounded […]
PyCoder’s Weekly: Issue #649 (Oct. 1, 2024) [Planet Python]
#649 – OCTOBER 1, 2024
In this tutorial, you’ll learn about the new features in Python 3.13. You’ll take a tour of the new REPL and error messages and see how you can try out the experimental free threading and JIT versions of Python 3.13 yourself.
REAL PYTHON
Some last-minute performance considerations are delaying the release of Python 3.13, with one of the features being backed out. The new target is next week.
PYTHON.ORG
pdb and breakpoint()
Python ships with a command-line based debugger called pdb. To set a breakpoint, you call the breakpoint() function in your code. This post introduces you to pdb and debugging from the command-line.
JUHA-MATTI SANTALA
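As a rough illustration of that workflow (a hypothetical script, not from the post), running the file below drops you into the pdb prompt at the breakpoint() call, where you can print variables with p, step with n, and continue with c:

# buggy.py -- run with: python3 buggy.py
def divide(a, b):
    breakpoint()  # execution pauses here and opens the pdb prompt
    return a / b

print(divide(4, 2))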
Don’t miss out on your chance to register for DevSecCon 2024! From the exciting lineup of 20+ sessions, here’s one that you can’t skip: Ali Diamond, from Hak5: “I’m A Software Engineer, and I Have to Make Bad Security Decisions—why?” Save your spot →
SNYK.IO sponsor
Looking to experiment or build your portfolio? Discover creative Django project ideas for all skill levels, from beginner apps to advanced full-stack projects.
EVGENIA VERBINA
In this tutorial, you’ll explore one of Python 3.13’s new features: a new and modern interactive interpreter, also known as a REPL.
REAL PYTHON
This post talks about the pros and cons of upgrading to Python 3.13 and why you might do it immediately or wait for the first patch release in December.
ITAMAR TURNER-TRAURING
Jack was toying around with a refactor where he wanted to replace a variable name across a large number of files. His usual tools of grep and sed weren’t sufficient, so he tried tree-sitter instead. Associated HN Discussion.
JACK EVANS
Information retrieval often uses a two-stage pipeline, where the first stage does a quick pass and the second re-ranks the content. This post talks about re-ranking, the different methods out there, and introduces a Python library to help you out.
BENJAMIN CLAVIE
A code contract is a way of specifying how your code is supposed to perform. Contracts can be useful for tests and to generally reduce the number of bugs in your code. This article introduces you to the concept and the dbc library.
LÉO GERMOND
Technical debt is the accumulation of design decisions that eventually slow teams down. This post talks about two ways to pay it down: using tech debt payments to get into the flow, and what you need before doing a big re-write.
GERGELY OROSZ
gather() in the Background
The asyncio.gather() method works as the meeting point for multiple co-routines, but it doesn’t have to be a synchronous call. This post teaches you how to use .gather() in the background.
JASON BROWNLEE
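A minimal sketch of the idea (my own example, not code from the post): gather() schedules its awaitables immediately and returns a future, so you can hold on to it and await the results later.

import asyncio

async def work(n):
    await asyncio.sleep(n)
    return n

async def main():
    # gather() starts running the coroutines right away...
    background = asyncio.gather(work(1), work(2))
    await asyncio.sleep(0.5)  # ...so other work can happen in the meantime
    results = await background  # collect the results when you need them
    print(results)  # [1, 2]

asyncio.run(main())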
import Techniques
The Python import system is as powerful as it is useful. In this in-depth video course, you’ll learn how to harness this power to improve the structure and maintainability of your code.
REAL PYTHON course
Ryan just finished his second round mentoring with the Djangonaut.Space program. This post talks about both how you can help your mentor help you, and how to be a good mentor.
RYAN CHELEY
__new__
The dunder method __new__ is used to customise object creation and is a core stepping stone in understanding metaprogramming in Python.
RODRIGO GIRÃO SERRÃO
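For flavour, one common __new__ pattern (an illustrative sketch, not taken from the article) is a class that only ever creates a single instance:

class Singleton:
    _instance = None

    def __new__(cls):
        # __new__ controls object creation and runs before __init__
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

assert Singleton() is Singleton()  # both calls return the same object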
This short post shows you how to prompt your users for input with Python’s built-in input() function.
TREY HUNNER
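The built-in works like this (an illustrative session, not from the post):

>>> name = input("What is your name? ")  # blocks until the user presses Enter
What is your name? Pythonista
>>> print(f"Hello, {name}!")
Hello, Pythonista!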
Talk Python interviews Anna-Lena Popkes and they talk about how and when to teach coding to children.
TALK PYTHON podcast
Events:
October 2, 2024 (REALPYTHON.COM)
October 3 to October 5, 2024 (PYCON.ORG)
October 3, 2024 (MEETUP.COM)
October 3, 2024 (SYPY.ORG)
October 4 to October 6, 2024 (PYCON.ORG)
October 4 to October 5, 2024 (DJANGODAY.DK)
October 9 to October 14, 2024 (PYCON.ORG)
October 10 to October 11, 2024 (PYCON.ORG)
Happy Pythoning!
This was PyCoder’s Weekly Issue #649.
PyCharm: Python 3.13 and the Latest Trends: A Developer’s Guide to 2025 [Planet Python]
We invite you to join us in just two days’ time, on October 3 at 5:00 pm CEST (11:00 am EDT), for a livestream shining a spotlight on Python 3.13 and the trends shaping its development.
Our speakers:
They will discuss the most notable features of Python 3.13 and examine the industry trends likely to influence its future. This is a great opportunity to get ahead of the release and ask your questions directly to the experts.
Don’t forget to enable YouTube notifications and mark your calendar.
PyCharm: PyCharm’s Interactive Tables for Data Science [Planet Python]
Data cleaning, exploration, and visualization are some of the most time-consuming tasks for data scientists. Nearly 50% of data specialists dedicate 30% or more of their time to data preparation. The pandas and Polars libraries are widely used for these purposes, each offering unique advantages. PyCharm supports both libraries, enabling users to efficiently explore, clean, and visualize data, even with large datasets.
In this blog post, you’ll discover how PyCharm’s interactive tables can enhance your productivity when working with either Polars or pandas. You will also learn how to perform many different data exploration tasks without writing any code and how to use JetBrains AI Assistant for data analysis.
To start using pandas for data analysis, import the library and load data from a file using pd.read_csv("FileName"), or drag and drop a CSV file into a Jupyter notebook. If you’re using Polars, import the library and use pl.read_csv("FileName"), passing the file name or path to the file, to load data into a DataFrame. Then print the dataset just by using the name of the variable.
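A minimal sketch of both approaches (the file name below is a placeholder):

import pandas as pd
import polars as pl

df_pd = pd.read_csv("data.csv")  # pandas DataFrame
df_pl = pl.read_csv("data.csv")  # Polars DataFrame

# In a Jupyter cell, evaluating the variable name renders the interactive table
df_pd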
Interactive tables offer a wide range of features that allow you to easily explore your data. For example, you can navigate through your data with infinite horizontal and vertical scrolling, use single and multiple column sorting, and many other features.
This feature allows you to sort columns alphabetically or maintain the existing column order. You can also find specific columns by typing the column name in the Column List menu. Through the context menu or Column List, you can selectively hide or display columns. For deeper analysis, you can hide all but the essential columns or use the Hide Other Columns option to focus on a single column.
Finally, you can open your dataframe in a separate window for even more in-depth analysis.
You can easily understand data types directly from column headers: each header displays an icon indicating the column’s data type, for example object or numeric data.
Additionally, you can access descriptive statistics by hovering over column headers in Compact mode or view them directly in Detailed mode, where distribution histograms are also available.
Interactive tables also offer several features available in the Chart view section.
You can access the AI Assistant in the upper-left corner of the tables for the following purposes:
Exploratory Data Analysis (EDA) is a crucial step in data science, as it allows data scientists to understand the underlying structure and patterns within a dataset before applying any modeling techniques. EDA helps you identify anomalies, detect outliers, and uncover relationships among variables – all of which are essential for making informed decisions.
Interactive tables offer many features that allow you to explore your data faster and get reliable results.
Let’s look at a real-life example of how the tables could boost the productivity of your EDA. For this example, we will use the Bengaluru House Dataset. Normally we start with an overview of our data. This includes just viewing it to understand the size of the dataset, data types of the columns, and so on. While you can certainly do this with the help of code, using interactive tables allows you to get this data without code. So, in our example, the size of the dataset is 13,320 rows and 9 columns, as you can see in the table header.
Our dataset also contains different data types, including numeric and string data. This means we can use different techniques for working with data, including correlation analysis and others.
And of course you can take a look at the data with the help of infinite scrolling and other features we mentioned above.
After getting acquainted with the data, the next step might be more in-depth analysis of the statistics. PyCharm provides a lot of important information about the columns in the table headers, including missing data, mode, mean, median, and so on.
For example, here we see that many columns have missing data. In the “bath” column, we obviously have an outlier, as the max value significantly exceeds the 95th percentile.
Additionally, data type mismatches, such as “total_sqft” not being a float or integer, indicate inconsistencies that could impact data processing and analysis.
After sorting, we notice one possible reason for the problem: the use of text values in data and ranges instead of normal numerical values.
Additionally, if our dataset doesn’t have hundreds of columns, we can use the help of AI Assistant and ask it to explain the DataFrame. From there, we can prompt it with any important questions, such as “What data problems in the dataset should be addressed and how?”
In some cases, data visualization can help you understand your data. PyCharm interactive tables provide two options for that. The first is Chart View and the second is Generate Visualizations in Chat.
Let’s say my hypothesis is that the price of a house should be correlated with its total floor area. In other words, the bigger a house is, the more expensive it should be. In this case, I can use a scatter plot in Chart View and discover that my hypothesis is likely correct.
PyCharm Professional’s interactive tables offer numerous benefits that significantly boost your productivity in data exploration and data cleaning. The tables allow you to work with the most popular data science library, pandas, and the fast-growing framework Polars, without writing any code. This is because the tables provide features like browsing, sorting, and viewing datasets; code-free visualizations; and AI-assisted insights.
Interactive tables in PyCharm not only save your time but also reduce the complexity of data manipulation tasks, allowing you to focus on deriving meaningful insights instead of writing boilerplate code for basic tasks.
Download PyCharm Professional and get an extended 60-day trial by using the promo code “PyCharmNotebooks”. The free subscription is available for individual users only.
For more information on interactive tables in PyCharm, check out our related blogs, guides, and documentation.
Real Python: Differences Between Python's Mutable and Immutable Types [Planet Python]
As a Python developer, you’ll have to deal with mutable and immutable objects sooner or later. Mutable objects are those that allow you to change their value or data in place without affecting the object’s identity. In contrast, immutable objects don’t allow this kind of operation. You’ll just have the option of creating new objects of the same type with different values.
In Python, mutability is a characteristic that may profoundly influence your decision when choosing which data type to use in solving a given programming problem. Therefore, you need to know how mutable and immutable objects work in Python.
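As a quick refresher (a minimal sketch, not an excerpt from the course):

numbers = [1, 2, 3]            # lists are mutable
before = id(numbers)
numbers.append(4)              # changed in place
assert id(numbers) == before   # same object identity

text = "immutable"             # strings are immutable
# text[0] = "I"                # would raise TypeError
new_text = text.upper()        # operations create new objects instead
assert new_text is not text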
In this video course, you’ll:
Python Insider: Python 3.12.7 released [Planet Python]
I'm pleased to announce the release of Python 3.12.7:
https://www.python.org/downloads/release/python-3127/
Python 3.12 is the newest major release of the Python programming language, and it contains many new features and optimizations. 3.12.7 is the latest maintenance release, containing more than 100 bugfixes, build improvements and documentation changes since 3.12.6.
Major new features of the 3.12 series, compared to 3.11, include:
- Support for the Linux perf profiler to report Python function names in traces.
- The deprecated wstr and wstr_length members of the C implementation of unicode objects were removed, per PEP 623.
- In the unittest module, a number of long deprecated methods and classes were removed. (They had been deprecated since Python 3.1 or 3.2.)
- The smtpd and distutils modules have been removed (see PEP 594 and PEP 632). The setuptools package continues to provide the distutils module.
- Invalid backslash escape sequences in strings now warn with SyntaxWarning instead of DeprecationWarning, making them more visible. (They will become syntax errors in the future.)
For more details on the changes to Python 3.12, see What’s new in Python 3.12.
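For example, the invalid-escape change is easy to see in an interactive session (an illustration, not part of the announcement):

>>> s = "\d"
<stdin>:1: SyntaxWarning: invalid escape sequence '\d'
>>> s = r"\d"  # a raw string avoids the warning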
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.
Your release team,
Thomas Wouters
Łukasz Langa
Ned Deily
Steve Dower
Mission Center (Linux System Monitor) Now Reports Fan Info [OMG! Ubuntu!]
A major new release of Mission Center, a modern system monitor app for Linux desktops, is now available. Fans of this Rust-based GTK4/libadwaita system monitoring tool (which, to address the recurring elephant in the room, does indeed have a user interface inspired by, and now I’d argue superior to, the Windows system monitor app) will find a lot to like in the latest update. I’m not going to recap all of this tool’s existing features in this post as I’ve covered this app a few times in the past. The Mission Center homepage has more details for the uninitiated. Instead, I’m going to focus […]
Robin Wilson: I won two British Cartographic Society awards! [Planet Python]
It’s been a while since I posted here – I kind of lost momentum over the summer (which is a busy time with a school-aged child) and never really picked it up again.
Anyway, I wanted to write a quick post to tell people that I won two awards at the British Cartographic Society awards ceremony a few weeks ago.
They were both for my British Placename Mapper web app, which is described in more detail in this blog post. If you haven’t seen it already, I strongly recommend you check it out.
I won a Highly Commended certificate in the Avenza Award for Electronic Mapping, and the First Prize trophy for the Ordnance Survey Award (for any map using OS data).
The certificates came in a lovely frame, and the trophy is enormous – about 30cm high and weighing over 3kg!
I was presented with the trophy at the BCS Annual Conference in London, but they very kindly offered to keep the trophy to save me carrying it across London on my wheelchair and back on the train, so they invited me to Ordnance Survey last week to be presented with it again. I had a lovely time at OS – including 30 minutes with their Director General/CEO and was formally presented with my trophy again (standing in front of the first ever Ordnance Survey map!):
Full information on the BCS awards is available on their website, and I strongly recommend submitting any appropriate maps you’ve made for next year’s awards. I need to get my thinking cap on for next year’s entry…
Arch Linux Is Now Working Directly With Valve [Slashdot: Linux]
The Arch Linux team has announced a collaboration with Valve, working to support critical infrastructure projects like a build service and secure signing enclave for the Arch Linux distribution. Tom's Hardware reports: If you're familiar with Valve and Steam Deck, you may already know that the Deck uses SteamOS 3, which is built on top of Arch Linux. Thanks to the Arch Linux base and Valve's development of the Proton compatibility layer for playing Windows games on Linux, we now have a far improved Linux gaming scene, especially on Valve's Steam Deck and Deck OLED handhelds. While Valve's specific reasons for picking Arch Linux for Steam Deck remain unknown, it's pretty easy to guess why it was picked. Mainly, it's a particularly lightweight distribution maintained since March 2002, which lends itself well to gaming with minimal performance overhead. A more intensive Linux distribution may not have been the ideal base for SteamOS 3, which is targeted at handhelds like Steam Deck first. As primary Arch Linux developer Levente Polyak discloses in the announcement post, "Valve is generously providing backing for two critical projects that will have a huge impact on our distribution: a build service infrastructure and a secure signing enclave. By supporting work on a freelance basis for these topics, Valve enables us to work on them without being limited solely by the free time of our volunteers." Polyak continues, "This opportunity allows us to address some of the biggest outstanding challenges we have been facing for a while. The collaboration will speed up the progress that would otherwise take much longer for us to achieve, and will ultimately unblock us from finally pursuing some of our planned endeavors [...] We believe this collaboration will greatly benefit Arch Linux, and are looking forward to share further development on the mailing list as work progresses."
VirtualBox 7.1.2 Adds Support for 3D Acceleration in ARM VMs [OMG! Ubuntu!]
Oracle has released a new maintenance update for VirtualBox, its open-source virtualisation software. VirtualBox 7.1.2 is the first such point release since the VirtualBox 7.1 series debuted earlier this month. Naturally, it builds on that major release with a flurry of bug fixes, performance finesse, and UI refinements, and adds a few new features. Among them, the latest version adds support for a multi-window layout, gives users the option to choose the remote display security method, and fixes 3D acceleration-related quirks, including black screens in Windows VMs and minor rendering issues. A bug fix ensures virtual machines created using […]
Ubuntu Patches ‘Severe’ Security Flaw in CUPS [OMG! Ubuntu!]
If you’ve cast a half-glazed eye over Linux social media feeds at some point in the past few days you may have caught wind that a huge Linux security flaw was about to be disclosed. And today it was: a remote code execution flaw affecting the CUPS printing stack used in most major desktop Linux distributions (including Ubuntu, and also Chrome OS). With a severity score of 9.9 it’s right at the edge of the most severe vulnerabilities possible. The CUPS Security Vulnerability Canonical explains in its security blog: “At its core, the vulnerability is exploited by tricking CUPS into […]
COSMIC DE Alpha 2 Released, This is What’s New [OMG! Ubuntu!]
Chocks away (a British saying, don’t stare at me weirdly) as the second alpha of System76’s homegrown COSMIC desktop environment has been released. To make it easy for us all to try out the latest improvements, a second alpha build of Pop!_OS 24.04 is also available to download. Those who installed the first Pop!_OS 24.04 alpha don’t need to re-install. All of the improvements in this post are available as software updates via the COSMIC App Store. Not that anyone needs to use Pop!_OS to try COSMIC. This Rust-based DE is also available to test on a wide range of […]
Ubuntu 24.10 ARM ISO Supports the ThinkPad X13s [OMG! Ubuntu!]
Ubuntu 24.10 supports the Snapdragon-powered Lenovo ThinkPad X13s laptop in the official ‘generic’ ARM64 ISO — a notable change. Although it is possible to use Ubuntu 23.10 on the ThinkPad X13s, it requires a custom ISO spun up specifically for this device. Ubuntu 24.04 LTS had no official installer image for this device (it is possible to upgrade to 24.04 from 23.10, albeit with caveats). But with the arrival of Ubuntu 24.10 in October, the standard Ubuntu ARM64 ISO (which works much like a regular Intel/AMD ISO, with a live session and guided installer) will happily boot on this […]
How I Booted Linux On an Intel 4004 from 1971 [Slashdot: Linux]
Long-time Slashdot reader dmitrygr writes: Debian Linux booted on a 4-bit intel microprocessor from 1971 — the first microprocessor in the world — the 4004. It is not fast, but it is a real Linux kernel with a Debian rootfs on a real board whose only CPU is a real intel 4004 from the 1970s. There's a detailed blog post about the experiment. (Its title? "Slowly booting full Linux on the intel 4004 for fun, art, and absolutely no profit.") In the post dmitrygr describes testing speed optimizations with an emulator where "my initial goal was to get the boot time under a week..."
Unlock Your Creativity: Building and Testing Websites in the Ubuntu Web Development Playground [Linux Journal - The Original Magazine of the Linux Community]
Ubuntu stands out as one of the most popular Linux distributions among web developers due to its stability, extensive community support, and robust package management. This article dives into creating a dedicated web development environment in Ubuntu, guiding you from the initial system setup to deploying and maintaining your websites.
Before diving into web development, ensure your Ubuntu installation is up to date. Ubuntu can run on a variety of hardware, but for a smooth development experience, a minimum of 4GB RAM and 25GB of available disk space is recommended. After installing Ubuntu, update your system:
sudo apt update && sudo apt upgrade
Web development typically involves a stack of software that includes a web server, a database system, and programming languages. Install the LAMP (Linux, Apache, MySQL, PHP) stack using:
sudo apt install apache2 mysql-server php libapache2-mod-php php-mysql
For JavaScript development, install Node.js and npm:
sudo apt install nodejs npm
Choose an editor that enhances your coding efficiency. Popular choices include:
Apache and Nginx are the most popular web servers. Apache is generally easier to configure for beginners:
sudo systemctl start apache2
sudo systemctl enable apache2
Nginx, alternatively, offers high performance and low resource consumption:
sudo apt install nginx
sudo systemctl start nginx
sudo systemctl enable nginx
Configure PHP by adjusting settings in php.ini, often found in /etc/php/7.4/apache2/php.ini, to suit your development needs. Python and other languages can be set up similarly, ensuring they are properly integrated with your web server.
Docker and Kubernetes revolutionize development by isolating environments and streamlining deployment.
Tor Project Merges With Tails [Slashdot: Linux]
The Tor Project: Today the Tor Project, a global non-profit developing tools for online privacy and anonymity, and Tails, a portable operating system that uses Tor to protect users from digital surveillance, have joined forces and merged operations. Incorporating Tails into the Tor Project's structure allows for easier collaboration, better sustainability, reduced overhead, and expanded training and outreach programs to counter a larger number of digital threats. In short, coming together will strengthen both organizations' ability to protect people worldwide from surveillance and censorship. Countering the threat of global mass surveillance and censorship to a free Internet, Tor and Tails provide essential tools to help people around the world stay safe online. By joining forces, these two privacy advocates will pool their resources to focus on what matters most: ensuring that activists, journalists, other at-risk and everyday users will have access to improved digital security tools. In late 2023, Tails approached the Tor Project with the idea of merging operations. Tails had outgrown its existing structure. Rather than expanding Tails's operational capacity on their own and putting more stress on Tails workers, merging with the Tor Project, with its larger and established operational framework, offered a solution. By joining forces, the Tails team can now focus on their core mission of maintaining and improving Tails OS, exploring more and complementary use cases while benefiting from the larger organizational structure of The Tor Project. This solution is a natural outcome of the Tor Project and Tails' shared history of collaboration and solidarity. 15 years ago, Tails' first release was announced on a Tor mailing list, Tor and Tails developers have been collaborating closely since 2015, and more recently Tails has been a sub-grantee of Tor. For Tails, it felt obvious that if they were to approach a bigger organization with the possibility of merging, it would be the Tor Project.
How to Disable the ‘Recent’ Files Section in Nautilus [OMG! Ubuntu!]
There’s one feature in the Nautilus file manager I use daily: the Recent files shortcut. One click brings up a pseudo-folder showing all of my recently downloaded, modified, and newly created files, regardless of which folders they’re in. I find this grouping dead handy – but I accept it’s also dead revealing too. Which is why not everyone likes this feature. Individual files can be hidden from view manually, but that’s effort. Since ‘Recent’ is pinned at the top of the sidebar, it’s easy to accidentally click it. Not an issue for most of us at home, but for those in […]
See Real-Time Power Usage (in Watts) in Ubuntu’s Top Panel [OMG! Ubuntu!]
If you’re looking for a no-fuss way to see real-time energy consumption on your Ubuntu laptop as you use it, a new GNOME Shell extension makes this deliciously easy. “Why would I want to see energy usage?” – anyone asking that question probably doesn’t. This is more for the curious folk; those keen to reveal the relative power demands of the software they run, the tasks they perform, the hardware settings they use, and the devices they connect – more of an educational tool than an essential one. Of course, you can monitor power consumption on Linux without any extension. […]
Harnessing the Power of Linux to Drive Innovations in Neuroscience Research [Linux Journal - The Original Magazine of the Linux Community]
The world of scientific computing has consistently leaned on robust, flexible operating systems to handle the demanding nature of research tasks. Linux, with its roots deeply embedded in the realms of free and open-source software, stands out as a powerhouse for computational tasks, especially in disciplines that require extensive data processing and modeling, such as neuroscience. This article delves into how Linux not only supports but significantly enhances neuroscience research, enabling breakthroughs that might not be as feasible with other operating systems.
Linux is not just an operating system; it's a foundation for innovation, particularly in scientific research. Its design principles — stability, performance, and adaptability — make it an ideal choice for the computational demands of modern science. Globally, research institutions and computational labs have adopted Linux due to its superior handling of complex calculations and vast networks of data-processing operations.
One of the most compelling features of Linux is its open-source nature, which allows researchers to inspect, modify, and enhance the source code to suit their specific needs. This transparency is crucial in neuroscience, where researchers often need to tweak algorithms or simulations to reflect the complexity of neural processes accurately.
Collaborative Environment: The ability to share improvements and innovations without licensing restrictions fosters a collaborative environment where researchers worldwide can build upon each other's work. This is particularly valuable in neuroscience, where collective advancements can lead to quicker breakthroughs in understanding neurological disorders.
Customization and Innovation: Researchers can develop and share custom-tailored solutions, such as neural network simulations and data analysis tools, without the constraints of commercial software licenses.
Linux offers unparalleled control over system operations, allowing researchers to optimize their computing environment down to the kernel level.
Custom Kernels: Neuroscience researchers can benefit from custom kernels that are optimized for tasks such as real-time data processing from neuroimaging equipment or managing large-scale neural simulations.
Performance Optimization: Linux allows the adjustment of system priorities to favor computation-heavy processes, crucial for running extensive simulations overnight or processing large datasets without interruption.
Critical Unauthenticated RCE Flaw Impacts All GNU/Linux Systems [Slashdot: Linux]
"Looks like there's a storm brewing, and it's not good news," writes ancient Slashdot reader jd. "Whether or not the bugs are classically security defects or not, this is extremely bad PR for the Linux and Open Source community. It's not clear from the article whether this affects other Open Source projects, such as FreeBSD." From a report: A critical unauthenticated Remote Code Execution (RCE) vulnerability has been discovered, impacting all GNU/Linux systems. As per agreements with developers, the flaw, which has existed for over a decade, will be fully disclosed in less than two weeks. Despite the severity of the issue, no Common Vulnerabilities and Exposures (CVE) identifiers have been assigned yet, although experts suggest there should be at least three to six. Leading Linux distributors such as Canonical and RedHat have confirmed the flaw's severity, rating it 9.9 out of 10. This indicates the potential for catastrophic damage if exploited. However, despite this acknowledgment, no working fix is still available. Developers remain embroiled in debates over whether some aspects of the vulnerability impact security.
The Document Foundation announces the LibreOffice and Open Source Conference 2024 [Press Releases Archives - The Document Foundation Blog]
Berlin, 25 September 2024 – The LibreOffice and Open Source Conference 2024 will take place in Luxembourg from 10 to 12 October 2024. It will be hosted by the Digital Learning Hub and the local campus of 42 Luxembourg at the Terres Rouges buildings in Belval, Esch-sur-Alzette.
This is a key event that brings together the LibreOffice community – supporting the leading FOSS office suite – with a large number of stakeholders: large open source projects, international organizations and representatives from EU institutions and European government departments.
Organized in partnership with the Luxembourg Media & Digital Design Centre (LMDDC), which will host the EdTech track, the event is sponsored by allotropia and Collabora, the two companies contributing most actively to the development of LibreOffice; Passbolt, the Luxembourg-made open source password manager for teams; and the Interdisciplinary Centre for Security, Reliability and Trust (SnT) of the University of Luxembourg.
In addition, local partners such as Luxembourg Convention Bureau, LIST, LU-CIX and Luxembourg House of Cybersecurity are supporting the organization of various aspects of the conference.
After the opening session on the morning of 10 October, which includes institutional presentations from the Minister for Digitalisation, the Ministry of the Economy and the European Commission’s OSPO, there will be tracks about LibreOffice covering development, quality, security, documentation, localization, marketing and enterprise deployments, and tracks about open source covering technologies in education, OSS applications and cybersecurity. Another session will focus on OSPOs (Open Source Programme Offices).
The LibreOffice and Open Source Conference Luxembourg 2024 provides a platform to discuss the latest technical developments, community contributions, and the challenges facing open source software and communities of which TDF, LibreOffice and its community are important components. Professionals, developers, volunteers and users from various fields will share their experiences and collaborate on the future direction of the leading office suite.
Policy and decision makers will find counterparts from all over Europe with which they will be able to exchange ideas and experiences that will help them to promote and implement open source software in public, education and private sector organizations.
On 11 and 12 October, there will also be workshops focusing on different aspects of LibreOffice development, targeted at undergraduate Computer Science students or anyone who knows programming and wants to become familiar with a large-scale, real-world open source software project. To better support the participants, we have limited the number of seats to 20, so register for the workshops as soon as possible to reserve your place.
Everyone is encouraged to register and participate in the conference to engage with the open source community, learn from different experts and contribute to meaningful discussions. Please note that, to avoid waste, we will plan for food, drinks and other free items for registered attendees, so help us to cater for your needs by registering in time.
Ubuntu 24.10 Beta Released, Available to Download [OMG! Ubuntu!]
A beta of Ubuntu 24.10 ‘Oracular Oriole’ is now available to download – a day later than planned! Developers and non-developers alike can download the beta to try the new features in Ubuntu 24.10, check compatibility, and flag any issues for fixing before the stable release takes flight next month. “The Beta images are known to be reasonably free of showstopper image build or installer bugs, while representing a very recent snapshot of 24.10 that should be representative [of] the final release”, says Canonical’s Utkarsh Gupta. This is the only beta release planned, though a release candidate will follow in a few […]
Vivaldi Web Browser is Now Available as a Snap [OMG! Ubuntu!]
Vivaldi web browser has arrived on the Canonical Snap Store – officially. This closed-source, Chromium-based web browser has been available on Linux since its debut in 2015, providing an official DEB package for Ubuntu users (which adds an APT repo for ongoing updates). And last year it became possible to get Vivaldi on Flathub – though that Flatpak build is only semi-official: maintained and packaged by a Vivaldi engineer, but not a recommended or supported package by Vivaldi itself – not yet, anyway! So to hear Vivaldi is embracing the Snap format is an interesting, albeit not surprising, move. It’s […]
Torvalds Weighs in On 'Nasty' Rust vs C For Linux Debate [Slashdot: Linux]
The Rust vs C battle raging in Linux circles has left even Linus Torvalds scratching his head. "I'm not sure why Rust has been such a contentious area," the Linux creator mused at this week's Open Source Summit, likening the fervor to ancient text editor wars. "It reminds me of when I was young and people were arguing about vi versus Emacs." The spat over integrating Rust into Linux has been brewing since 2022, with critics slamming it as an "insult" to decades of kernel work. One maintainer recently quit, fed up with the "nontechnical nonsense." Torvalds struck a surprisingly diplomatic tone. He praised how Rust has "livened up discussions" while admitting some arguments get "nasty." "C is, in the end, a very simple language," Torvalds said, explaining its appeal and pitfalls. "Because it's simple it's also very easy to make mistakes. And Rust is not." Torvalds remains upbeat about Rust's future in Linux, nonetheless. "Even if it were to become a failure -- and I don't think it will -- that's how you learn," he said.
20 Years Later, Real-Time Linux Makes It To the Kernel [Slashdot: Linux]
ZDNet's Steven Vaughan-Nichols reports: After 20 years, Real-Time Linux (PREEMPT_RT) is finally -- finally -- in the mainline kernel. Linus Torvalds blessed the code while he was at Open Source Summit Europe. [...] The real-time Linux code is now baked into all Linux distros as of the forthcoming Linux 6.12 kernel. This means Linux will soon start appearing in more mission-critical devices and industrial hardware. But it took its sweet time getting here. An RTOS is a specialized operating system designed to handle time-critical tasks with precision and reliability. Unlike general-purpose operating systems like Windows or macOS, an RTOS is built to respond to events and process data within strict time constraints, often measured in milliseconds or microseconds. As Steven Rostedt, a prominent real-time Linux developer and Google engineer, put it, "Real-time is the fastest worst-case scenario." He means that the essential characteristic of an RTOS is its deterministic behavior. An RTOS guarantees that critical tasks will be completed within specified deadlines. [...] So, why is Real-Time Linux only now completely blessed in the kernel? "We actually would not push something up unless we thought it was ready," Rostedt explained. "Almost everything was usually rewritten at least three times before it went into mainline because we had such a high bar for what would go in." In addition, the path to the mainline wasn't just about technical challenges. Politics and perception also played a role. "In the beginning, we couldn't even mention real-time," Rostedt recalled. "Everyone said, 'Oh, we don't care about real-time.'" Another problem was money. For many years funding for real-time Linux was erratic. In 2015, the Linux Foundation established the Real-Time Linux (RTL) collaborative project to coordinate efforts around mainlining PREEMPT_RT. The final hurdle for full integration was reworking the kernel's printk function, a critical debugging tool dating back to 1991. Torvalds was particularly protective of printk -- he wrote the original code and still uses it for debugging. However, printk also puts a hard delay in a Linux program whenever it's called. That kind of slowdown is unacceptable in real-time systems. Rostedt explained: "Printk has a thousand hacks to handle a thousand different situations. Whenever we modified printk to do something, it would break one of these cases. The thing about printk that's great about debugging is you can know exactly where you were when a process crashed. When I would be hammering the system really, really hard, and the latency was mostly around maybe 30 microseconds, and then suddenly it would jump to five milliseconds." That delay was the printk message. After much work, many heated discussions, and several rejected proposals, a compromise was reached earlier this year. Torvalds is happy, the real-time Linux developers are happy, printk users are happy, and, at long last, real-time Linux is real.
Linus Torvalds Muses About Maintainer Gray Hairs, Next 'King of Linux' [Slashdot: Linux]
An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: In a candid keynote chat at the Linux Foundation's Open Source Summit Europe, Linux creator Linus Torvalds shared his thoughts on kernel development, the integration of Rust, and the future of open source. Dirk Hohndel, Verizon's Open Source Program Office head and a friend of Torvalds, moderated their conversation about the Linux ecosystem. Torvalds emphasized that kernel releases, like the recent 6.11 version, are intentionally not exciting. "For almost 15 years, we've had a very good regular cadence of releases," he explained. With releases every nine weeks, this regularity aims for timeliness and reliability rather than flashy new features. The Linux creator noted that while drivers still make up the bulk of changes, core kernel development continues to evolve. "I'm still surprised that we're doing very core development," Torvalds said, mentioning ongoing work in virtual file systems and memory management. [...] Shifting back to another contentious subject -- maintainer burnout and succession planning -- Hohndel observed that "maintainers are aging. Strangely, some of us have, you know, not quite as much or the right hair color anymore." (Torvalds interjected that "gray is the right color.") Hohndel continued, "So the question that I always ask myself: Is it about time to talk about there being a mini-Linus?" Torvalds turned the question around. True, the Linux maintainers are getting older and people do burn out and go away. "But that's kind of normal. What is not normal is that people actually stay around for decades. That's the unusual thing, and I think that's a good sign." At the same time, Torvalds admitted, it can be intimidating for a younger developer to join the Linux kernel team "when you see all these people who have been around for decades, but at the same time, we have many new developers. Some of those new developers come in, and three years later, they are top maintainers." Hohndel noted that "to be the king of Linux, the main maintainer, you have to have a lot of experience. And the backup right now is Greg KH (Greg Kroah-Hartman, maintainer of the stable Linux kernel), who is about the same age as we are and has even less hair." True, Torvalds responded, "But the thing is, Greg hasn't always been Greg. Before Greg, there's been Andrew (Morton) and Alan (Cox). After Greg, there will be Shannon and Steve. The real issue is you have to have a person or a group of people that the development community can trust, and part of trust is fundamentally about having been around for long enough that people know how you work, but long enough does not mean to be 30 years." Hohndel made one last comment: "What I'm trying to say is, you've been doing this for 33 years. I don't want to be morbid, but I think in 33 years, you may no longer be doing this?" Torvalds, making motions as though he was using a walker, replied, "I would love to still do this conference with you." The report notes the contention around the integration of Rust, highlighted by the recent departure of Rust for Linux maintainer Wedson Filho. Despite resistance from some devs who prefer C and are skeptical of Rust, Torvalds remains optimistic about Rust's future in the kernel. He said: "Rust is a very different thing, and there are a lot of people who are used to the C model. They don't like the differences, but that's OK. In the kernel itself, absolutely nobody understands everything. I don't. I rely heavily on maintainers of various subsystems. I think the same can be true of Rust and C. I think it's one of our strengths in the kernel that we can specialize. Clearly, some people just don't like the notion of Rust and having Rust encroach on their area. But we've only been doing Rust for a couple of years, so it's way too early to say Rust is a failure." Meanwhile, Torvalds confirmed that the long-anticipated real-time Linux (RTLinux) project will finally be integrated into the kernel with the upcoming release of Linux 6.12.
Linux Kernel 6.11 is Out [Slashdot: Linux]
Linux creator Linus Torvalds has released version 6.11 of the open-source operating system kernel. The new release, while not considered major by Torvalds, introduces several notable improvements for AMD hardware users and Arch Linux developers. ZDNet: This latest version introduces several enhancements, particularly for AMD hardware users, while offering broader system improvements and new capabilities. These include:
- RDNA4 Graphics Support: The kernel now includes baseline support for AMD's upcoming RDNA4 graphics architecture. This early integration bodes well for future AMD GPU releases, ensuring Linux users have day-one support.
- Core Performance Boost: The AMD P-State driver now includes handling for AMD Core Performance Boost. This driver gives AMD Core users more granular control over turbo and boost frequency ranges.
- Fast Collaborative Processor Performance Control (CPPC) Support: Overclockers who want the most power possible from their computers will be happy with this improvement to the AMD P-State driver. This feature enhances power efficiency on recent Ryzen (Zen 4) mobile processors. This can improve performance by 2-6% without increasing power consumption.
- AES-GCM Crypto Performance: AMD and Intel CPUs benefit from significantly faster AES-GCM encryption and decryption processing, up to 160% faster than previous versions.
Linux Developer Swatted and Handcuffed During Live Video Stream [Slashdot: Linux]
Last October Slashdot reported on René Rebe's discovery of a random illegal instruction speculation bug on AMD Ryzen 7000-series and Epyc Zen 4 CPUs — which Rebe discussed on his YouTube channel. But this week's YouTube episode had a different ending, reports Tom's Hardware... Two days ago, tech streamer and host of Code Therapy René Rebe was streaming one of many T2 Linux (his own custom distribution) development sessions from his office in Germany when he abruptly had to remove his microphone and walk off camera due to the arrival of police officers. The officers subsequently cuffed him and took him to the station for an hour of questioning, a span of time during which the stream continued to run until he made it back... [T]he police seemingly have no idea who did it and acted based on a tip sent with an email. Finding the perpetrators could take a while, and options will be fairly limited if they don't also live in Germany. Rebe has been contributing to Linux "since as early as 1998," according to the article, and started his own T2 SDE embedded Linux distribution in 2004, as well. (And he's also a contributor to many other major open source projects.) The article points out that Linux and other communities "are compelled by little-to-no profit motive, so in essence, René has been providing unpaid software development for the greater good for the past two decades."
A Simple Guide to Data Visualization on Ubuntu for Beginners [Linux Journal - The Original Magazine of the Linux Community]
Data visualization is not just an art form but a crucial tool in the modern data analyst's arsenal, offering a compelling way to present, explore, and understand large datasets. In the context of Ubuntu, one of the most popular Linux distributions, leveraging the power of data visualization tools can transform complex data into insightful, understandable visual narratives. This guide delves deep into the art and science of data visualization within Ubuntu, providing users with the knowledge to not only create but also optimize and innovate their data presentations.
Ubuntu, known for its stability and robust community support, serves as an ideal platform for data scientists and visualization experts. The versatility of Ubuntu allows for the integration of a plethora of data visualization tools, ranging from simple plotting libraries to complex interactive visualization platforms. The essence of data visualization lies in its ability to turn abstract numbers into visual objects that the human brain can interpret much faster and more effectively than raw data.
Before diving into the creation of stunning graphics and plots, it's essential to set up your Ubuntu system for data visualization. Here's how you can prepare your environment:
System Requirements: Install Python using sudo apt install python3 and R using sudo apt install r-base. Then install Python plotting libraries such as Matplotlib (pip install matplotlib), Seaborn (pip install seaborn), and Plotly (pip install plotly), along with R packages like ggplot2 (install.packages("ggplot2")).
Several tools and libraries are available for Ubuntu users, each with unique features and capabilities:
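Whichever tools you choose, a quick way to confirm the Matplotlib install above works is a tiny smoke-test plot (a hypothetical example, not from the article):

import matplotlib.pyplot as plt

# A tiny line plot to confirm Matplotlib works
plt.plot([1, 2, 3, 4], [1, 4, 9, 16])
plt.xlabel("x")
plt.ylabel("x squared")
plt.title("Matplotlib smoke test")
plt.savefig("smoke_test.png")  # or plt.show() for an interactive window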
Bridging the Gap: The First Enterprise-Grade Linux Solution for the Cloud-to-Edge Continuum [Linux Journal - The Original Magazine of the Linux Community]
As the Linux market is set to soar to nearly USD 100 billion by 2032, businesses are facing mounting challenges in managing increasingly complex workloads spanning from the cloud to the edge. Traditional Linux distributions are not built to meet the specific demands of these modern use cases, creating an urgent need for a more specialized, enterprise-grade solution.
Historically, enterprises have depended on general-purpose Linux distributions operating across racked servers and hybrid data centers to centrally store and process their data. But with the rapid rise of edge computing and the Internet of Things (IoT), real-time data processing closer to the source has become mission-critical. Industries like healthcare, telecommunications, industrial automation, and defense now require localized, lightning-fast processing to make real-time decisions.
This shift to edge computing and connected IoT has sparked a surge of use cases that demand specialized solutions to address unique operational requirements such as size, performance, serviceability, and security. For instance, the telecommunications sector demands carrier-grade Linux (CGL) and edge vRAN solutions with reliability requirements exceeding 99.999% uptime.
Yet, traditional enterprise Linux distributions—while robust for central data centers—are too general to meet the diverse, exacting needs of IoT and edge environments. Linux offerings are continuing to expand beyond conventional distributions like Debian, Ubuntu, and Fedora, but the market lacks a unified platform that can effectively bridge the gap between edge and cloud workloads.
To stay competitive, businesses need computing solutions that process time-sensitive data at the edge, connect intelligent devices, and seamlessly share insights across cloud environments. But no single Linux provider has yet bridged the cloud-to-edge divide—until now.
Wind River® eLxr Pro breaks new ground as the industry’s first end-to-end Linux solution that connects enterprise-grade workloads from the cloud to the edge. By delivering unmatched commercial support for the open source eLxr project, Wind River has revolutionized how businesses manage critical workloads across distributed environments—unlocking new levels of efficiency and scalability.
As a founding member and leading contributor to the eLxr project, Wind River ensures the eLxr project’s enterprise-grade Debian-derivative distribution meets the evolving needs of mission-critical environments. This deep integration provides customers with unparalleled community influence and support, making Wind River the go-to provider for secure, reliable, enterprise-grade Linux deployments.
Why Ubuntu Secure Boot is Essential for Protecting Your Computer [Linux Journal - The Original Magazine of the Linux Community]
As our reliance on technology grows, so does the need for robust security measures that protect systems from unauthorized access and malicious attacks. One critical area of focus is the system's boot process, a vulnerable phase where malware, rootkits, and other threats can potentially infiltrate and compromise the entire operating system. This is where Secure Boot, a feature of the UEFI (Unified Extensible Firmware Interface), comes into play, providing a defense mechanism against unauthorized software being loaded during the boot process.
Ubuntu, one of the most widely used Linux distributions, implements Secure Boot as part of its strategy to protect user systems from threats. While Secure Boot has stirred some debate in the open-source community due to its reliance on cryptographic signatures, its value in ensuring system integrity is undeniable. In this article, we will explore what Secure Boot is, how Ubuntu implements it, and its role in enhancing system security.
Secure Boot is a security standard developed by members of the PC industry to ensure that a device boots only using software that is trusted by the manufacturer. It is a feature of UEFI firmware, which has largely replaced the traditional BIOS in modern systems. The fundamental purpose of Secure Boot is to prevent unauthorized code—such as bootkits and rootkits—from being executed during the boot process, which could otherwise compromise the operating system at a low level.
By requiring that each piece of software involved in the boot process be signed with a trusted certificate, Secure Boot ensures that only authenticated and verified code can run. If an untrusted or unsigned bootloader or kernel is detected, the boot process will be halted to prevent any malicious software from being loaded.
How Secure Boot Works
At its core, Secure Boot operates by maintaining a database of trusted keys and signatures within the UEFI firmware. When the system is powered on, UEFI verifies the digital signature of the bootloader, typically GRUB in Linux systems, against these trusted keys. If the bootloader’s signature matches a known trusted key, UEFI proceeds to load the bootloader, which then continues with loading the operating system kernel. Each component in this chain must have a valid cryptographic signature; otherwise, the boot process is stopped.
If a system has Secure Boot enabled, it verifies the integrity of the kernel and modules as well. This adds another layer of security, ensuring that not only the bootloader but also the OS components are secure.
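On an Ubuntu system you can check whether Secure Boot is currently enforced. A small sketch (mokutil ships with Ubuntu's shim tooling; the exact wording of its output can vary between versions):
# Ask the firmware whether Secure Boot is enabled
mokutil --sb-state
# Typical output on a protected system: "SecureBoot enabled"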
LibreOffice 24.2.6 available for download, for the privacy-conscious user [Press Releases Archives - The Document Foundation Blog]
Berlin, 5 September 2024 – LibreOffice 24.2.6, the sixth minor release of the free, volunteer-supported office productivity suite for office environments and individuals, the best choice for privacy-conscious users and digital sovereignty, is available at https://www.libreoffice.org/download for Windows, macOS and Linux.
The release includes over 40 bug and regression fixes over LibreOffice 24.2.5 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.6 is aimed at mainstream users and enterprise production environments.
LibreOffice is the only office suite with a feature set comparable to the market leader, and offers a range of user interface options to suit all users, from traditional to modern Microsoft Office-style. The UI has been developed to make the most of different screen form factors by optimizing the space available on the desktop to put the maximum number of features just a click or two away.
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.
Availability of LibreOffice 24.2.6
LibreOffice 24.2.6 is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Windows 7 SP1 and macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/.
Next week, power users and technology enthusiasts will be able to download LibreOffice 24.8.1, the first minor release of the recently announced new version with many bug and regression fixes. A summary of the new features of the LibreOffice 24.8 family is available on this blog post: https://blog.documentfoundation.org/blog/2024/08/22/libreoffice-248/.
End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which is available for download from the following link: https://books.libreoffice.org/. In addition, they will be able to get first-level technical support from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org.
LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.
[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.6/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.6/RC2.
LibreOffice 24.8, for the privacy-conscious office suite user [Press Releases Archives - The Document Foundation Blog]
The new major release provides a wealth of new features, plus a large number of interoperability improvements
Berlin, 22 August 2024 – LibreOffice 24.8, the new major release of the free, volunteer-supported office suite for Windows (Intel, AMD and ARM), macOS (Apple and Intel) and Linux is available from our download page. This is the second major release to use the new calendar-based numbering scheme (YY.M), and the first to provide an official package for Windows PCs based on ARM processors.
LibreOffice is the only office suite, or if you prefer, the only software for creating documents that may contain personal or confidential information, that respects the privacy of the user – thus ensuring that the user is able to decide if and with whom to share the content they have created. As such, LibreOffice is the best option for the privacy-conscious office suite user, and provides a feature set comparable to the leading product on the market. It also offers a range of interface options to suit different user habits, from traditional to contemporary, and makes the most of different screen sizes by optimising the space available on the desktop to put the maximum number of features just a click or two away.
The biggest advantage over competing products is the LibreOffice Technology engine, the single software platform on which desktop, mobile and cloud versions of LibreOffice – including those provided by ecosystem companies – are based. This allows LibreOffice to offer a better user experience and to produce identical and perfectly interoperable documents based on the two available ISO standards: the Open Document Format (ODT, ODS and ODP), and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX). The latter hides a large amount of artificial complexity, which may create problems for users who are confident that they are using a true open standard.
End users looking for support will be helped by the immediate availability of the LibreOffice 24.8 Getting Started Guide, which is available for download from the Bookshelf. In addition, they will be able to get first-level technical support from volunteers on user mailing lists and the Ask LibreOffice website.
New Features of LibreOffice 24.8
PRIVACY
WRITER
CALC
IMPRESS & DRAW
CHART
ACCESSIBILITY
SECURITY
INTEROPERABILITY
A video showcasing the most significant new features is available on YouTube and PeerTube.
Contributors to LibreOffice 24.8
There are 171 contributors to the new features of LibreOffice 24.8: 57% of code commits come from the 49 developers employed by companies on TDF’s Advisory Board – Collabora, allotropia and Red Hat – and other organisations, another 20% from seven developers at The Document Foundation, and the remaining 23% from 115 individual volunteer developers.
An additional 188 volunteers have committed localized strings in 160 languages, representing hundreds of people actually providing translations. LibreOffice 24.8 is available in 120 languages, more than any other desktop software, making it available to over 5.5 billion people in their native language. In addition, over 2.4 billion people speak one of these 120 languages as a second language (L2).
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: LibreOffice in Business.
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and improves the LibreOffice Technology platform. Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), mobile platforms (Android and iOS) and the cloud.
Migrations to LibreOffice
The Document Foundation has developed a migration protocol to help companies move from proprietary office suites to LibreOffice, based on the deployment of an LTS (long-term support) enterprise-optimised version of LibreOffice plus migration consulting and training provided by certified professionals who offer value-added solutions consistent with proprietary offerings. Reference: professional support page.
In fact, LibreOffice’s mature code base, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners make it the ideal solution for organisations looking to regain control of their data and break free from vendor lock-in.
Availability of LibreOffice 24.8
LibreOffice 24.8 is available on our download page. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 [1] and Apple MacOS 10.15. LibreOffice Technology-based products for Android and iOS are listed on this page.
For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 24.2 family, which includes several months of back-ported fixes. The current release is LibreOffice 24.2.5.
LibreOffice users, free software advocates and community members can support The Document Foundation with a donation on our donate page.
[1] This does not mean that The Document Foundation suggests the use of this operating system, which Microsoft itself no longer supports and which, for security reasons, should not be used.
Release Notes: wiki.documentfoundation.org/ReleaseNotes/24.8
Press Kit with Images: nextcloud.documentfoundation.org/s/JEe8MkDZWMmAGmS
How Linux Shapes Modern Cloud Computing [Linux Journal - The Original Magazine of the Linux Community]
Cloud computing has transformed the way businesses and individuals store, manage, and process data. At its core, cloud computing refers to the on-demand availability of computing resources—such as storage, processing power, and applications—over the internet, eliminating the need for local infrastructure. With scalability, flexibility, and cost efficiency as its hallmarks, cloud computing has become an essential element in the digital landscape.
While cloud computing can be run on various operating systems, Linux has emerged as the backbone of the majority of cloud infrastructures. Whether powering public cloud services like Amazon Web Services (AWS), Google Cloud Platform (GCP), or private clouds used by enterprises, Linux provides the performance, security, and flexibility required for cloud operations. This article delves into why Linux has become synonymous with cloud computing, its key roles in various cloud models, and the future of Linux in this ever-evolving field.
One of the primary reasons Linux is so deeply integrated into cloud computing is its open source nature. Linux is free to use, modify, and distribute, which makes it attractive for businesses and cloud service providers alike. Companies are not locked into restrictive licensing agreements and are free to tailor Linux to their specific needs, an advantage not easily found in proprietary systems like Windows.
The open source nature of Linux also fosters collaboration. Thousands of developers continuously improve Linux, making it more secure, efficient, and feature-rich. For cloud computing, where innovation is key, this continuous improvement ensures that Linux remains adaptable to the latest technological advances.
Performance and Stability
In cloud environments, performance and uptime are critical. Any downtime or inefficiency can have a ripple effect, causing disruptions for businesses and users. Linux is renowned for its stability and high performance under heavy workloads. Its efficient handling of system resources—such as CPU and memory management—enables cloud providers to maximize performance and minimize costs. Additionally, Linux’s stability ensures that systems run smoothly without frequent crashes or the need for constant reboots, a crucial factor in maintaining high availability for cloud services.
Unlocking the Secrets of Writing Custom Linux Kernel Drivers for Smooth Hardware Integration [Linux Journal - The Original Magazine of the Linux Community]
Kernel drivers are the bridge between the Linux operating system and the hardware components of a computer. They play a crucial role in managing and facilitating communication between the OS and various hardware devices, such as network cards, storage devices, and more. Writing custom kernel drivers allows developers to interface with new or proprietary hardware, optimize performance, and gain deeper control over system resources.
In this article, we will explore the intricate process of writing custom Linux kernel drivers for hardware interaction. We'll cover the essentials, from setting up your development environment to advanced topics like debugging and performance optimization. By the end, you'll have a thorough understanding of how to create a functional and efficient driver for your hardware.
Before diving into driver development, it's important to have a foundational knowledge of Linux, programming, and kernel development. Here’s what you need to know:
Basic Linux Knowledge
Familiarity with Linux commands, file systems, and system architecture is essential. You'll need to navigate through directories, manage files, and understand how the Linux OS functions at a high level.
Programming Skills
Kernel drivers are primarily written in C. Understanding C programming and low-level system programming concepts is crucial for writing effective drivers. Knowledge of data structures, memory management, and system calls will be particularly useful.
Kernel Development Basics
Understanding the difference between kernel space and user space is fundamental. Kernel space is where drivers and the core of the operating system run, while user space is where applications operate. Familiarize yourself with kernel modules, which are pieces of code that can be loaded into the kernel at runtime.
Having a properly configured development environment is key to successful kernel driver development. Here’s how to get started:
Linux Distribution and Tools
Choose a Linux distribution that suits your needs. Popular choices for kernel development include Ubuntu, Fedora, and Debian. Install essential development tools, including a compiler toolchain and the headers for your running kernel:
You can install these tools using your package manager. For example, on Ubuntu, you can use:
sudo apt-get install build-essential
sudo apt-get install linux-headers-$(uname -r)
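With the toolchain and headers installed, the edit-build-test loop for an out-of-tree module looks roughly like this. A sketch only: it assumes you already have a module source file hello.c plus the usual one-line kernel Makefile (obj-m += hello.o) in the current directory:
# Build the module against the running kernel's build tree
make -C /lib/modules/$(uname -r)/build M=$PWD modules
# Load it, inspect the kernel log, then unload it
sudo insmod hello.ko
sudo dmesg | tail
sudo rmmod hello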
Linux Filesystem Hierarchy: Your Guide to Understanding Its Layout [Linux Journal - The Original Magazine of the Linux Community]
Navigating the Linux filesystem hierarchy can be a daunting task for newcomers and even seasoned administrators. Unlike some other operating systems, Linux follows a unique directory structure that is both systematic and crucial for system management and operation. Understanding this structure is essential for efficient system administration, troubleshooting, and software management. In this article, we’ll dive deep into the Linux filesystem hierarchy, exploring each directory's purpose and significance.
The Root Directory (/)
At the pinnacle of the Linux filesystem hierarchy is the root directory, denoted by a single forward slash (/). This directory is the starting point from which all other directories branch out. Think of it as the base of a tree, with all other directories extending from it.
The root directory is essential for the operating system’s overall structure, providing the foundation upon which the entire filesystem is built. All files and directories, regardless of their location, can ultimately be traced back to the root directory.
Understanding the primary directories within the Linux filesystem is crucial for effective navigation and management. Here’s a detailed look at each significant directory:
/bin
The /bin directory houses essential binary executables that are necessary for the system to function correctly, even in single-user mode. These binaries are crucial for basic system operations and recovery. Examples include ls (list directory contents), cp (copy files), and rm (remove files). These utilities are used by both system administrators and regular users.
/sbin
Like /bin, the /sbin directory contains system binaries, but these are primarily administrative commands used for system maintenance and configuration. These binaries are typically used by the root user or system administrators. Commands such as fsck (filesystem check), reboot (reboot the system), and ifconfig (network interface configuration) are located here.
/etc
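You can see this layout from a shell by asking where a user utility and an admin utility actually live. A sketch (on modern merged-/usr distributions /bin and /sbin are symlinks into /usr, so your output may differ):
# Resolve one everyday command and one administrative command
command -v ls
command -v fsck
# Inspect the directories themselves
ls -ld /bin /sbin /etc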
Rust for Linux Maintainer Steps Down in Frustration With 'Nontechnical Nonsense' [Slashdot: Linux]
Efforts to add Rust code to the Linux kernel have suffered a setback as one of the maintainers of the Rust for Linux project has stepped down -- citing frustration with "nontechnical nonsense," according to The Register: Wedson Almeida Filho, a software engineer at Microsoft who has overseen the Rust for Linux project, announced his resignation in a message to the Linux kernel development mailing list. "I am retiring from the project," Filho declared. "After almost four years, I find myself lacking the energy and enthusiasm I once had to respond to some of the nontechnical nonsense, so it's best to leave it up to those who still have it in them." [...] Memory safety bugs are regularly cited as the major source of serious software vulnerabilities by organizations overseeing large projects written in C and C++. So in recent years there's been a concerted push from large developers like Microsoft and Google, as well as from government entities like the US Cybersecurity and Infrastructure Security Agency, to use memory-safe programming languages -- among them Rust. Discussions about adding Rust to Linux date back to 2020 and were realized in late 2022 with the release of Linux 6.1. "I truly believe the future of kernels is with memory-safe languages," Filho's note continued. "I am no visionary but if Linux doesn't internalize this, I'm afraid some other kernel will do to it what it did to Unix."
Linux 6.12 To Optionally Display A QR Code During Kernel Panics [Slashdot: Linux]
New submitter meisdug writes: A new feature has been submitted for inclusion in Linux 6.12 that allows the DRM Panic handler to display a QR code when a kernel panic occurs. The QR code can capture detailed error information that is often missed in traditional text-based panic messages, making it more user-friendly. The feature, written in Rust, is optional and can be enabled via a specific build switch. This implementation follows similar ideas from other operating systems and earlier discussions in the Linux community.
EmuDeck Enters the Mini PC Market With Linux-Powered 'EmuDeck Machines' [Slashdot: Linux]
An anonymous reader quotes a report from overkill.wtf: The team behind popular emulation tool EmuDeck is today announcing something rather special: they've spent the first half of 2024 working on their very first hardware product, called the EmuDeck Machine, and it's due to arrive before the year is out. This EmuDeck Machine is an upcoming, crowdfunded, retro emulation mini PC running Bazzite, a Linux-based system similar to SteamOS. [...] This new EmuDeck Machine comes in two variants, the EM1 running an Intel N97 APU, and the EM2 -- based on an AMD Ryzen 8600G. While both machines are meant as emulation-first devices, the AMD-based variant can easily function as a console-like PC. This is also thanks to some custom work done by the team: "We've optimized the system for maximum power. The default configuration of an 8600G gets you 32 FPS in Cyberpunk; we've managed to reach 47 FPS with a completely stable system, or 60 FPS if you use FSR." Both machines will ship with a Gamesir Nova Lite controller and EmuDeck preinstalled, naturally. The team has also preinstalled all available Decky plugins. But that's not all: if the campaign is successful, the EmuDeck team will also work on a docking station for the EM2 that will upgrade the graphics to an AMD Radeon 7600 desktop GPU. With this, in games like Cyberpunk 2077, you'll be able to reach 160 FPS in 1080p as per EmuDeck's measurements. You can preorder the EmuDeck Machines via Indiegogo, starting at $322 and shipping in December.
'Uncertainty' Drives LinkedIn To Migrate From CentOS To Azure Linux [Slashdot: Linux]
The Register's Liam Proven reports: Microsoft's in-house professional networking site is moving to Microsoft's in-house Linux. This could mean that big changes are coming for the former CBL-Mariner distro. Ievgen Priadka's post on the LinkedIn Engineering blog, titled Navigating the transition: adopting Azure Linux as LinkedIn's operating system, is the visible sign of what we suspect has been a massive internal engineering effort. It describes some of the changes needed to migrate what the post calls "most of our fleet" from the end-of-life CentOS 7 to Microsoft Azure Linux -- the distro that grew out of and replaced its previous internal distro, CBL-Mariner. This is an important stage in a long process. Microsoft acquired LinkedIn way back in 2016. Even so, as recently as the end of last year, we reported that a move to Azure had been abandoned, which came a few months after it laid off almost 700 LinkedIn staff -- the majority in R&D. The blog post is over 3,500 words long, so there's quite a lot to chew on -- and we're certain that this has been passed through and approved by numerous marketing and management people and scoured of any potentially embarrassing admissions. Some interesting nuggets remain, though. We enjoyed the modest comment that: "However, with the shift to CentOS Stream, users felt uncertain about the project's direction and the timeline for updates. This uncertainty created some concerns about the reliability and support of CentOS as an operating system." [...] There are some interesting technical details in the post too. It seems LinkedIn is running on XFS -- also the RHEL default file system, of course -- with the notable exception of Hadoop, and so the Azure Linux team had to add XFS support. Some CentOS and actual RHEL is still used in there somewhere. That fits perfectly with using any of the RHELatives. However, the post also mentions that the team developed a tool to aid with deploying via MaaS, which it explicitly defines as Metal as a Service. MaaS is a Canonical service, although it does support other distros -- so as well as CentOS, there may have been some Ubuntu in the LinkedIn stack as well. Some details hint at what we suspect were probably major deployment headaches. [...] Some of the other information covers things the teams did not do, which is equally informative. [...]
Announcement of LibreOffice 24.2.5 Community, optimized for the privacy-conscious user [Press Releases Archives - The Document Foundation Blog]
Berlin, 11 July 2024 – LibreOffice 24.2.5 Community, the fifth minor release of the free, volunteer-supported office productivity suite for office environments and individuals, the best choice for privacy-conscious users and digital sovereignty, is available at www.libreoffice.org/download for Windows, macOS and Linux.
The release includes more than 70 bug and regression fixes over LibreOffice 24.2.4 [1] to improve the stability and robustness of the software, as well as interoperability with legacy and proprietary document formats. LibreOffice 24.2.5 Community is the most advanced version of the office suite and is aimed at power users but can be used safely in other environments.
LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all users, from traditional to modern Microsoft Office-style, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a range of dedicated value-added features, long term support and other benefits such as SLAs: www.libreoffice.org/download/libreoffice-in-business/
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform. All products based on that platform share the same approach, optimised for the privacy-conscious user.
Availability of LibreOffice 24.2.5 Community
LibreOffice 24.2.5 Community is available at www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: www.libreoffice.org/download/android-and-ios/
For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains a version with some months of back-ported fixes. The current release of that line has reached end of life, so users should update to LibreOffice 24.2.5, which will take over that role when the new major release LibreOffice 24.8 becomes available in August.
The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: ask.libreoffice.org
LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at www.libreoffice.org/donate
[1] Fixes in RC1: wiki.documentfoundation.org/Releases/24.2.5/RC1. Fixes in RC2: wiki.documentfoundation.org/Releases/24.2.5/RC2.
LibreOffice 24.2.4 Community available for download [Press Releases Archives - The Document Foundation Blog]
Berlin, 6 June 2024 – LibreOffice 24.2.4 Community, the fourth minor release of the free, volunteer-supported office suite for personal productivity in office environments, is now available at https://www.libreoffice.org/download for Windows, MacOS and Linux.
The release includes over 70 bug and regression fixes over LibreOffice 24.2.3 [1] to improve the stability and robustness of the software. LibreOffice 24.2.4 Community is the most advanced version of the office suite, offering the best features and interoperability with Microsoft Office proprietary formats.
LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all user habits, from traditional to modern, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.
Availability of LibreOffice 24.2.4 Community
LibreOffice 24.2.4 Community is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple MacOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/
For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 7.6 family, which includes several months of back-ported fixes. The current release is LibreOffice 7.6.7 Community, but it will be replaced in that role by LibreOffice 24.2.4 when the new major release LibreOffice 24.8 becomes available.
The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org
LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.
[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.4/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.4/RC2.
LibreOffice 7.6.7 for productivity environments [Press Releases Archives - The Document Foundation Blog]
Berlin, May 10, 2024 – LibreOffice 7.6.7 Community, the last minor release of the 7.6 line, is available from https://www.libreoffice.org/download for Windows, macOS, and Linux. This is the most thoroughly tested version, for deployments by individuals, small and medium businesses, and other organizations in productivity environments. This new minor release fixes bugs and regressions which can be looked up in the changelog [1].
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with many dedicated value-added features and other benefits such as SLA (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/
Users can download LibreOffice 7.6.7 Community from the office suite website: https://www.libreoffice.org/download/. Minimum requirements are Microsoft Windows 7 SP1 and Apple macOS 10.14. LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/
The Document Foundation does not provide technical support for users, although they can be helped by volunteers on user mailing lists and on the Ask LibreOffice website: https://ask.libreoffice.org
LibreOffice users, free software advocates and community members can support The Document Foundation with a donation at https://www.libreoffice.org/donate
[1] Change log pages: https://wiki.documentfoundation.org/Releases/7.6.7/RC1 and https://wiki.documentfoundation.org/Releases/7.6.7/RC2
Announcement of LibreOffice 24.2.3 Community [Press Releases Archives - The Document Foundation Blog]
Berlin, 2 May 2024 – LibreOffice 24.2.3 Community, the third minor release of the free, volunteer-supported office suite for personal productivity in office environments, is now available at https://www.libreoffice.org/download for Windows, macOS and Linux.
The release includes around 80 bug and regression fixes over LibreOffice 24.2.2 [1] to improve the stability and robustness of the software. LibreOffice 24.2.3 Community is the most advanced version of the office suite, offering the best features and interoperability with Microsoft Office proprietary formats.
LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all user habits, from traditional to modern, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.
The most significant advantage of LibreOffice over other office suites is the LibreOffice Technology engine, a single software platform for all environments: desktop, cloud and mobile. This allows LibreOffice to provide a better user experience and produce identical, and interoperable, documents based on both ISO standards: Open Document Format (ODT, ODS and ODP) for users concerned about compatibility, resilience and digital sovereignty, and the proprietary Microsoft format (DOCX, XLSX and PPTX).
A full description of all the new features of the LibreOffice 24.2 major release line can be found in the release notes [2].
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.
Availability of LibreOffice 24.2.3 Community
LibreOffice 24.2.3 Community is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/
For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 7.6 family, which includes several months of back-ported fixes. The current release is LibreOffice 7.6.6 Community.
The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org
LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate
[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.3/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.3/RC2.
[2] Release Notes: https://wiki.documentfoundation.org/ReleaseNotes/24.2
Joint release of LibreOffice 24.2.2 Community and LibreOffice 7.6.6 Community [Press Releases Archives - The Document Foundation Blog]
Berlin, 28 March 2024 – Today the Document Foundation releases LibreOffice 24.2.2 Community [1] and LibreOffice 7.6.6 Community [2], both minor releases that fix bugs and regressions to improve quality and interoperability for individual productivity.
Both versions are immediately available from https://www.libreoffice.org/download. All LibreOffice users are encouraged to update their current version as soon as possible to take advantage of improvements. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple MacOS 10.15.
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/.
The Document Foundation does not provide technical support to users, although it is available from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org.
LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.
[1] Change logs for LibreOffice 24.2.2 Community: https://wiki.documentfoundation.org/Releases/24.2.2/RC1 (release candidate 1) and https://wiki.documentfoundation.org/Releases/24.2.2/RC2 (release candidate 2).
[2] Change logs for LibreOffice 7.6.6 Community: https://wiki.documentfoundation.org/Releases/7.6.6/RC1 (release candidate 1) and https://wiki.documentfoundation.org/Releases/7.6.6/RC2 (release candidate 2).
Announcement of LibreOffice 24.2.1 Community [Press Releases Archives - The Document Foundation Blog]
Berlin, 29 February 2024 – LibreOffice 24.2.1 Community, the first minor release of the free, volunteer-supported office suite for personal productivity in office environments, is now available at https://www.libreoffice.org/download for Windows, MacOS and Linux.
The release includes more than 100 bug and regression fixes over LibreOffice 24.2 [1] to improve the stability and robustness of the software. LibreOffice 24.2.1 Community is the most advanced version of the office suite, offering the best features and interoperability with Microsoft Office proprietary formats.
LibreOffice is the only office suite with a feature set comparable to the market leader. It also offers a range of interface options to suit all user habits, from traditional to modern, and makes the most of different screen form factors by optimising the space available on the desktop to put the maximum number of features just a click or two away.
Highlights of LibreOffice 24.2.1 Community
The main advantage of LibreOffice over other office suites is the LibreOffice Technology engine, a single software platform for all environments: desktop, cloud and mobile. This allows LibreOffice to provide a better user experience and produce identical – and interoperable – documents based on both ISO standards: Open Document Format (ODT, ODS and ODP) for users concerned about compatibility, resilience and digital sovereignty, and the proprietary Microsoft OOXML (DOCX, XLSX and PPTX).
Most notable new features in the LibreOffice 24.2 family:
GENERAL
• Save AutoRecovery information is enabled by default, and always creates backup copies
• Fixed various NotebookBar options, with many menu improvements, better print preview support, proper reset of customised layout, and enhanced use of radio buttons
• The Insert Special Character drop-down list now displays a character description for the selected character (and in the tooltip when you hover over it)
WRITER
• “Legal” ordered list numbering: make a given list level use Arabic numbering for all its numeric portions
• Comments can now use styles, with the Comment paragraph style being the default
• Improved various aspects of multi-page floating table support: overlap control, borders and footnotes, nesting, wrap on all pages, and related UI improvements
CALC
• A new search field has been added to the Functions sidebar deck
• The scientific number format is now supported and saved in ODF
• Highlight the Row and Column corresponding to the active cell
IMPRESS & DRAW
• The handling of small caps has been implemented for Impress
• Moved Presenter Console and Remote control settings from Tools > Options > LibreOffice Impress to Slide Show > Slide Show Settings, with improved labelling and dialogue layout
• Several improvements and fixes to templates
ACCESSIBILITY
• Several significant improvements to the handling of mouse positions and the presentation of dialogue boxes via the Accessibility APIs, allowing screen readers to present them correctly
• Improved management of IAccessible2 roles and text/object attributes, allowing screen readers to present them correctly
• Status bars in dialogs are reported with the correct accessible role so that screen readers can find and report them appropriately, while checkboxes in dialogs can be toggled using the space bar
SECURITY
• The Save with Password dialogue box now has a password strength meter
• New password-based ODF encryption that performs better, hides metadata better, and is more resistant to tampering and brute force
• Clarification of the text in the options dialogue box around the macro security settings, so that it is clear exactly what is allowed and what is not
The LibreOffice 24.2 family offers a host of enhancements and new features aimed at users sharing documents with or migrating from MS Office, building on the advanced features of the LibreOffice Technology platform for personal productivity on the desktop, mobile and in the cloud.
A full description of all the new features can be found in the release notes [2].
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a wide range of dedicated value-added features and other benefits such as SLAs: https://www.libreoffice.org/download/libreoffice-in-business/
Every line of code developed by ecosystem companies for enterprise customers is shared with the community on the master code repository and contributes to the improvement of the LibreOffice Technology platform.
Availability of LibreOffice 24.2.1 Community
LibreOffice 24.2.1 Community is available at https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple MacOS 10.15. Products based on LibreOffice Technology for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/
For users who don’t need the latest features and prefer a version that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 7.6 family, which includes several months of back-ported fixes. The current release is LibreOffice 7.6.5 Community.
The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org
LibreOffice users, free software advocates and community members can support the Document Foundation by making a donation at https://www.libreoffice.org/donate.
[1] Fixes in RC1: https://wiki.documentfoundation.org/Releases/24.2.1/RC1. Fixes in RC2: https://wiki.documentfoundation.org/Releases/24.2.1/RC2.
[2] Release Notes: https://wiki.documentfoundation.org/ReleaseNotes/24.2
LVM Logical Volumes [linux blogs franz ulenaers]
A partition of type "Linux LVM" can be used for logical volumes, but also as a "snapshot"!
A snapshot can be an exact copy of a logical volume, frozen at a given moment: this makes it possible to take consistent backups of logical volumes while they are in use!
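As an illustration of such a backup, a sketch not taken from the original post (the volume group mijnvg and logical volume mijnhome match the examples below; the snapshot name and the 5G reserved for changes are assumptions):
# Freeze the current state of "mijnhome" in a snapshot
sudo lvcreate -s -L 5G -n mijnhome_snap /dev/mijnvg/mijnhome
# Mount the frozen copy read-only and take the backup from it
sudo mount -o ro /dev/mijnvg/mijnhome_snap /mnt
sudo tar czf /tmp/mijnhome-backup.tar.gz -C /mnt .
# Clean up: unmount and remove the snapshot
sudo umount /mnt
sudo lvremove -y /dev/mijnvg/mijnhome_snap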
How to install?
sudo apt-get install lvm2
Create a physical volume for a partition
command = 'pvcreate' partition
example (the partition must be of type "Linux LVM"!):
pvcreate /dev/sda5
Create a volume group
vgcreate vg_storage partition
example:
vgcreate mijnvg /dev/sda5
Add a logical volume to a volume group
lvcreate -L size_in_M/G -n logical_volume_name volume_group
example:
lvcreate -L 30G -n mijnhome mijnvg
Activate a volume group
vgchange -a y volume_group_name
example:
vgchange -a y mijnvg
My physical and logical volumes
physical volume:
pvcreate /dev/sda1
volume group:
vgcreate mydell /dev/sda1
logical volumes:
lvcreate -L 1G -n boot mydell
lvcreate -L 100G -n data mydell
lvcreate -L 50G -n home mydell
lvcreate -L 50G -n root mydell
lvcreate -L 1G -n swap mydell
Growing/shrinking a logical volume
grow my home logical volume by 1 GB:
lvextend -L +1G /dev/mapper/mydell-home
Beware: shrinking a logical volume can lead to data loss if there is not enough space left!
lvreduce -L -1G /dev/mapper/mydell-home
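When the volume carries a filesystem, the filesystem must be resized along with the volume. A sketch (assuming a resizable filesystem such as ext4, as used elsewhere in this post) that lets LVM handle both steps at once via the --resizefs flag:
# Grow the volume and the filesystem on it together
sudo lvextend --resizefs -L +1G /dev/mapper/mydell-home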
Show physical volumes
sudo pvs
Shown are: PV physical volume, VG volume group, Fmt format (normally lvm2), Attr attributes, PSize size of the PV, PFree free space
PV VG Fmt Attr PSize PFree
/dev/sda6 mydell lvm2 a-- 920,68g 500,63g
sudo pvs -a
sudo pvs /dev/sda6
Backing up the logical volume configuration
see the included script LVM_bkup
Show volume groups
sudo vgs
VG #PV #LV #SN Attr VSize VFree
mydell 1 6 0 wz--n- 920,68g 500,63g
Show logical volumes
sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
boot mydell -wi-ao---- 952,00m
data mydell -wi-ao---- 100,00g
home mydell -wi-ao---- 93,13g
mintroot mydell -wi-a----- 101,00g
root mydell -wi-ao---- 94,06g
swap mydell -wi-ao---- 30,93g
How to remove a logical volume?
A logical volume can only be removed when its volume group is no longer active;
deactivate it with the vgchange command:
vgchange -a n mydell
lvremove /dev/my_volume_group/logical_volume_name
example:
lvremove /dev/mydell/data
How to remove a physical volume?
vgreduce mydell /dev/sda1
Attachments: LVM_bkup (0.8 KB)
How to mount and unmount a USB stick without being root, with your own rwx permissions! [linux blogs franz ulenaers]
How do you mount and unmount a USB stick without being root, and with rwx permissions?
---------------------------------------------------------------------------------------------------------
(rename every ulefr01 to your own username!)
Use the 'fatlabel' command to assign a volume name or label if you use a vfat filesystem on your USB stick;
use the 'tune2fs' command for ext2,3,4.
To give your USB stick the volume name stick32GB, use the command:
sudo tune2fs -L stick32GB /dev/sdc1
note: substitute the correct device for /dev/sdc1!
After mounting you may see dmesg messages such as: Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
Use the filesystem consistency check command fsck to correct this.
Do a umount before you run fsck! (use the correct device!)
fsck /dev/sdc1
note: substitute your own device for /dev/sdc1!
Insert your stick into a USB port and unmount it
sudo chown ulefr01:ulefr01 /media/ulefr01/ -R
Set an ACL on your ext2,3,4 stick (does not work on vfat!)
setfacl -m u:ulefr01:rwx /media/ulefr01
with getfacl you can see the ACL
getfacl /media/ulefr01
with the ls command you can see the result
ls /media/ulefr01 -dla
drwxrwx--- 5 ulefr01 ulefr01 4096 Oct 1 18:40 /media/ulefr01
note: if the '+' is present, an ACL is already in place, as on the following line:
drwxrwx---+ 5 ulefr01 ulefr01 4096 Oct 1 18:40 /media/ulefr01
Insert your stick into a USB port and check whether it is mounted automatically
check the permissions of existing files and directories on your stick
ls * -la
if root or other owners are already present, reset them with the following command
sudo chown ulefr01:ulefr01 /media/ulefr01/stick32GB -R
cd /media/ulefr01
mkdir mmcblk16G stick32GB stick16gb
add a line to /etc/fstab for each stick
examples
LABEL=mmcblk16G /media/ulefr01/mmcblk16G ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick32GB /media/ulefr01/stick32GB ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick16gb /media/ulefr01/stick16gb vfat user,defaults,noauto 0 0
The following should now be possible:
mount and umount without being root
note: you cannot umount if the mount was done by root! If that is the case, first umount as root, then mount as your user; after that you can also umount as your user.
create a new file on your stick without being root
create a new directory on your stick without being root
check that you can create new files without being root:
touch test
ls test -la
rm test
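Put together, a typical session could look like this (a sketch assuming the fstab entries above are in place and the stick is labelled stick32GB):
# Mount as an ordinary user (permitted by the "user" option in /etc/fstab)
mount /media/ulefr01/stick32GB
# Create and remove a file with your own permissions
touch /media/ulefr01/stick32GB/test
rm /media/ulefr01/stick32GB/test
# Unmount again as the same user
umount /media/ulefr01/stick32GB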
Setting an ACL [linux blogs franz ulenaers]
note: generally possible on Linux filesystems: btrfs, ext2, ext3, ext4 and ReiserFS!
How to set an ACL for one user?
setfacl -m u:ulefr01:rwx /home/ulefr01
note: use your own username instead of ulefr01
How to remove an ACL?
setfacl -x u:ulefr01 /home/ulefr01
How to set an ACL for two or more users?
setfacl -m u:ulefr01:rwx /home/ulefr01
setfacl -m u:myriam:r-x /home/ulefr01
note: use your second username instead of myriam; here myriam has no w (write) access, but does have r (read) and x (exec)!
How to list the configured ACLs?
getfacl /home/ulefr01
getfacl: Removing leading '/' from absolute path names
# file: home/ulefr01
# owner: ulefr01
# group: ulefr01
user::rwx
user:ulefr01:rwx
user:myriam:r-x
group::---
mask::rwx
other::---
How to check the result?
getfacl /home/ulefr01
see above
ls /home/ulefr01 -dla
drwxrwx---+ ulefr01 ulefr01 4096 Oct 1 18:40 /home/ulefr01
note the + sign!
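A related option, not covered in the note above, is a default ACL, which makes files and directories created inside the directory inherit the entry automatically (assuming that is what you want):
# New items under /home/ulefr01 will inherit read/exec for myriam
setfacl -d -m u:myriam:r-x /home/ulefr01
# Default entries show up with a "default:" prefix
getfacl /home/ulefr01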
The best (most performant) filesystem on a USB stick, and how to set it up [linux blogs franz ulenaers]
The best (most performant) filesystem is ext4.
How to set it up?
mkfs.ext4 $device
first turn off the journal:
tune2fs -O ^has_journal $device
do journaling only with data_writeback:
tune2fs -o journal_data_writeback $device
do not use reserved space, set it to zero:
tune2fs -m 0 $device
for the three actions above, the included bash script can be used:
file USBperf
# USBperfext4
echo 'USBperf'
echo '--------'
echo 'ext4 device ?'
read device
echo "device= $device"
echo 'ok ?'
read ok
if [ "$ok" = "" ] || [ "$ok" = "n" ] || [ "$ok" = "N" ]
then
    echo 'not ok - stopping'
    exit 1
fi
echo "disabling journaling: tune2fs -O ^has_journal $device"
tune2fs -O ^has_journal $device
echo "using writeback data mode: tune2fs -o journal_data_writeback $device"
tune2fs -o journal_data_writeback $device
echo "disabling reserved space: tune2fs -m 0 $device"
tune2fs -m 0 $device
echo 'done !'
read ok
echo "device= $device"
exit 0
adjust /etc/fstab for your USB stick:
use the 'noatime' option
Encryption [linux blogs franz ulenaers]
With encryption you can secure the data on your computer by making it unreadable to the outside world!
How can you encrypt a filesystem?
Install the following open source packages:
loop-aes-utils and cryptsetup
apt-get install loop-aes-utils
apt-get install cryptsetup
How to create a secured filesystem?
You can make your filesystem available automatically with an entry such as the following in /etc/fstab:
/home/cryptfile /mnt/crypt ext3 auto,encryption=aes,user,exec 0 0
....
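Since cryptsetup is installed above, the more current LUKS route is worth sketching as well (not part of the original post; /dev/sdc1 and the mapping name mycrypt are illustrative only, and luksFormat destroys all existing data on the partition):
# Initialise LUKS on the partition (asks for a passphrase; DESTROYS existing data)
sudo cryptsetup luksFormat /dev/sdc1
# Open the encrypted partition as /dev/mapper/mycrypt
sudo cryptsetup luksOpen /dev/sdc1 mycrypt
# Create a filesystem inside it and mount it
sudo mkfs.ext4 /dev/mapper/mycrypt
sudo mount /dev/mapper/mycrypt /mnt/crypt
# Afterwards: unmount and close the mapping
sudo umount /mnt/crypt
sudo cryptsetup luksClose mycrypt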
You can turn off your encryption by means of ...
App Launchers for Ubuntu 19.04 [Tech Drive-in]
During the transition period, when GNOME Shell and Unity were pretty rough around the edges and slow to respond, third-party app launchers were a big deal. Over time the newer desktop environments improved and became fast, reliable and predictable, reducing the need for alternate app launchers.
As a result, many third-party app launchers have either slowed down development or simply ceased to exist. Ulauncher seems to be the only one to have bucked the trend so far. Synapse and Kupfer, on the other hand, though old and not as actively developed anymore, still pack a punch. Since Kupfer is too old school, we'll only be discussing Synapse and Ulauncher here.
sudo dpkg -i ~/Downloads/ulauncher_4.3.2.r8_all.deb
sudo apt-get install -f
A Standalone Video Player for Netflix, YouTube, Twitch on Ubuntu 19.04 [Tech Drive-in]
Snap apps are a godsend. ElectronPlayer is an Electron-based app available on the Snap Store that doubles up as a standalone media player for video streaming services such as Netflix, YouTube, Twitch, Floatplane etc.
And it works great on Ubuntu 19.04 "disco dingo". From what we've tested, Netflix works like a charm, and so does YouTube. ElectronPlayer also has a picture-in-picture mode that lets it run above desktop and full-screen applications.
sudo snap install electronplayer
Howto Upgrade to Ubuntu 19.04 from Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]
As most of you should know already, Ubuntu 19.04 "disco dingo" has been released. A lot of things have changed; see our comprehensive list of improvements in Ubuntu 19.04. Though it is not really necessary to make the jump, I'm sure many here would prefer to have the latest and greatest from Ubuntu. Here's how you upgrade to Ubuntu 19.04 from Ubuntu 18.10 and Ubuntu 18.04.
Upgrading to Ubuntu 19.04 from Ubuntu 18.04 LTS is tricky. There is no way you can make the jump from Ubuntu 18.04 LTS directly to Ubuntu 19.04. For that, you need to upgrade to Ubuntu 18.10 first. Pretty disappointing, I know. But when upgrading an entire OS, you can't be too careful.
And the process itself is not as tedious or time-consuming as it is on Windows. Also unlike Windows, the upgrades are not forced upon you while you're in the middle of something.
sudo do-release-upgrade -d
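One practical detail worth checking first (standard Ubuntu behaviour, but verify on your system): an 18.04 LTS install only offers upgrades to other LTS releases by default, so the release-upgrade policy has to be set to normal before 18.10 is offered:
# Allow non-LTS release upgrades (stock file and setting on Ubuntu)
sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades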
15 Things I Did Post Ubuntu 19.04 Installation [Tech Drive-in]
Ubuntu 19.04, codenamed "Disco Dingo", has been released (and upgrading is easier than you think). I've been on Ubuntu 19.04 since its first Alpha, and this has been a rock solid release as far as I'm concerned. Changes in Ubuntu 19.04 are more evolutionary though, but the availability of the latest Linux Kernel version 5.0 is significant.
sudo apt update && sudo apt dist-upgrade
sudo apt install gnome-tweaks
sudo apt install ubuntu-restricted-extras
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
sudo apt install chrome-gnome-shell
sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers
Ubuntu 19.04 Gets Newer and Better Wallpapers [Tech Drive-in]
A "Disco Dingo" themed wallpaper was already there. But the latest update bring a bunch of new wallpapers as system defaults on Ubuntu 19.04.
LinuxBoot: A Linux Foundation Project to replace UEFI Components [Tech Drive-in]
UEFI has a pretty bad reputation among many in the Linux community. UEFI unnecessarily complicated Linux installation and distro-hopping on machines with Windows pre-installed, for example. The LinuxBoot project by the Linux Foundation aims to replace some firmware functionality, like the UEFI DXE phase, with Linux components.
What is UEFI?
UEFI is a standard or a specification that replaced legacy BIOS firmware, which was the industry standard for decades. Essentially, UEFI defines the software components between operating system and platform firmware.
UEFI boot has three phases: SEC, PEI and DXE. In the Driver eXecution Environment (DXE) phase, the UEFI system loads drivers for configured devices. LinuxBoot will replace specific firmware functionality like the UEFI DXE phase with a Linux kernel and runtime.
LinuxBoot and the Future of System Startup
"Firmware has always had a simple purpose: to boot the OS. Achieving that has become much more difficult due to increasing complexity of both hardware and deployment. Firmware often must set up many components in the system, interface with more varieties of boot media, including high-speed storage and networking interfaces, and support advanced protocols and security features." writes Linux Foundation.
Look up Uber Time, Price Estimates on Terminal with Uber CLI [Tech Drive-in]
The worldwide phenomenon that is Uber needs no introduction. Uber is an immensely popular ride-sharing, ride-hailing company that is valued in the billions. Uber is so disruptive and controversial that many cities and even countries are putting up barriers to protect the interests of local taxi drivers.
Enough about Uber as a company. To those among you who regularly use Uber app for booking a cab, Uber CLI could be a useful companion.
sudo apt update
sudo apt install nodejs npm
npm install uber-cli -g
uber time 'pickup address here'
Easy, right? I did some testing with places and addresses I'm familiar with, where Uber cabs are fairly common, and I found the results to be fairly accurate. Do test and leave feedback. See the Uber CLI github page for more info.
uber price -s 'start address' -e 'end address'
UBports Installer for Ubuntu Touch is just too good! [Tech Drive-in]
Even as someone who bought into the Ubuntu Touch hype very early, I was not expecting much from UBports, to be honest. But to my pleasant surprise, the UBports Installer turned my 4-year-old BQ Aquaris E4.5 Ubuntu Edition hardware into a slick, clean, and usable phone again.
Retro Terminal that Emulates Old CRT Display (Ubuntu 18.10, 18.04 PPA) [Tech Drive-in]
We've featured cool-retro-term before. It is a wonderful little terminal emulator app on Ubuntu (and Linux) that adorns this cool retro look of the old CRT displays.
Let the pictures speak for themselves.
sudo add-apt-repository ppa:vantuz/cool-retro-term
sudo apt update
sudo apt install cool-retro-term
Google's Stadia Cloud Gaming Service, Powered by Linux [Tech Drive-in]
Unless you live under a rock, you must've been inundated with nonstop news about Google's high-octane launch ceremony yesterday where they unveiled the much hyped game streaming platform called Stadia.
Stadia, or Project Stream as it was earlier called, is a cloud gaming service where the games themselves are hosted on Google's servers, while the visual feedback from the game is streamed to the player's device through Google Chrome. If this technology catches on, and if it works just as well as shown in the demos, Stadia could be what the future of gaming looks like.
Ubuntu 19.04 Updates - 7 Things to Know [Tech Drive-in]
Ubuntu 19.04 has been released. I've been using it for the past week or so, and even as a pre-release build, the OS was pretty stable and not buggy at all. Here are a bunch of things you should know about Ubuntu 19.04.
Purism: A Linux OS is talking Convergence again [Tech Drive-in]
The hype around "convergence" just won't die, it seems. We have heard it a lot from Ubuntu, from KDE, and even from Google and Apple. But the dream of true convergence, a uniform OS experience across platforms, never really materialised. Even behemoths like Apple and Google failed to pull it off with their Android/iOS duopoly. Purism's Debian-based PureOS wants to change all that for good.
"Purism is beating the duopoly to that dream, with PureOS: we are now announcing that Purism’s PureOS is convergent, and has laid the foundation for all future applications to run on both the Librem 5 phone and Librem laptops, from the same PureOS release", announced Jeremiah Foster, the PureOS director at Purism (by duopoly, he was referring to Android/iOS platforms that dominate smartphone OS ecosystem).
"it turns out that this is really hard to do unless you have complete control of software source code and access to hardware itself. Even then, there is a catch; you need to compile software for both the phone’s CPU and the laptop CPU which are usually different architectures. This is a complex process that often reveals assumptions made in software development but it shows that to build a truly convergent device you need to design for convergence from the beginning."
Komorebi Wallpapers display Live Time & Date, Stunning Parallax Effect on Ubuntu [Tech Drive-in]
Live wallpapers are not a new thing. In fact, we had a lot of live wallpapers to choose from on Linux 10 years ago. Today? Not so much. Be it GNOME or KDE, most desktops today are far less customizable than they used to be. The Komorebi wallpaper manager for Ubuntu is kind of a way-back machine in that sense.
sudo apt remove komorebi
Snap Install Mario Platformer on Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]
Nintendo's Mario needs no introduction. This game defined our childhoods. Now you can install and have fun with an unofficial version of the famed Mario platformer in Ubuntu 18.10 via this Snap package.
sudo snap install mari0
sudo snap connect mari0:joystick
Florida based Startup Builds Ubuntu Powered Aerial Robotics [Tech Drive-in]
Apellix is a Florida based startup that specialises in aerial robotics. They intend to create safer work environments by replacing workers with its task-specific drones to complete high-risk jobs at dangerous/elevated work sites.
Openpilot: An Opensource Alternative to Tesla Autopilot, GM Super Cruise [Tech Drive-in]
Openpilot is an opensource driving agent which at the moment can perform industry-standard functions such as Adaptive Cruise Control and Lane Keeping Assist System for a select few auto manufacturers.
Oranchelo - The icon theme to beat on Ubuntu 18.10 [Tech Drive-in]
OK, that might be an overstatement. But Oranchelo is good, really good.
sudo add-apt-repository ppa:oranchelo/oranchelo-icon-theme
sudo apt update
sudo apt install oranchelo-icon-theme
11 Things I did After Installing Ubuntu 18.10 Cosmic Cuttlefish [Tech Drive-in]
I have been using "Cosmic Cuttlefish" since its first beta. It is perhaps one of the most visually pleasing Ubuntu releases ever. But more on that later. Now let's discuss what can be done to improve the overall user experience by diving deep into the nitty-gritty of Canonical's brand new flagship OS.
sudo apt install ubuntu-restricted-extras
sudo apt install gnome-tweaks
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
sudo add-apt-repository ppa:slgobinath/safeeyes
sudo apt update
sudo apt install safeeyes
sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers
sudo gedit /etc/default/apport
RIOT OS: A tiny Opensource OS for the 'Internet of Things' (IoT) [Tech Drive-in]
"RIOT powers the Internet of Things like Linux powers the Internet." RIOT is a small, free and opensource operating system for the memory constrained, low power wireless IoT devices.
IBM, the 6th biggest contributor to Linux Kernel, acquires RedHat for $34 Billion [Tech Drive-in]
The $34 billion all-cash deal to purchase open source pioneer Red Hat is IBM's biggest-ever acquisition by far. The deal will give IBM a major foothold in the fast-growing cloud computing market, and the combined entity could give stiff competition to Amazon's cloud computing platform, AWS. But what about Red Hat and its future?
"Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise,” said Jim Whitehurst, President and CEO, Red Hat. “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation."Predicting the future can be tricky. A lot of things can go wrong. But one thing is sure, the acquisition of Red Hat by IBM is nothing like the Oracle - Sun deal. Between them, IBM and Red Hat must have contributed more to the open source community than any other organization.
How to Upgrade from Ubuntu 18.04 LTS to 18.10 'Cosmic Cuttlefish' [Tech Drive-in]
One day left before the final release of Ubuntu 18.10 codenamed "Cosmic Cuttlefish". This is how you make the upgrade from Ubuntu 18.04 to 18.10.
$ sudo apt update && sudo apt dist-upgrade
$ sudo apt autoremove
$ sudo gedit /etc/update-manager/release-upgrades
$ sudo do-release-upgrade -d
Meet 'Project Fusion': An Attempt to Integrate Tor into Firefox [Tech Drive-in]
A real private mode in Firefox? A Tor integrated Firefox could just be that. Tor Project is currently working with Mozilla to integrate Tor into Firefox.
"Our ultimate goal is a long way away because of the amount of work to do and the necessity to match the safety of Tor Browser in Firefox when providing a Tor mode. There's no guarantee this will happen, but I hope it will and we will keep working towards it."As If you want to help, Firefox bugs tagged 'fingerprinting' in the whiteboard are a good place to start. Further reading at TOR 'Project Fusion' page.
City of Bern Awards Switzerland's Largest Open Source Contract for its Schools [Tech Drive-in]
In another major win within a span of weeks for the proponents of open source solutions in the EU, Bern, the capital of Switzerland, is pushing ahead with its plans to adopt open source tools as its software of choice for all its public schools. If all goes well, some 10,000 students in Swiss schools could soon start getting their training using an IT infrastructure that is largely open source.
Germany says No to Public Cloud, Chooses Nextcloud's Open Source Solution [Tech Drive-in]
Germany's Federal Information Technology Centre (ITZBund) opts for an on-premise cloud solution which, unlike those fancy public cloud solutions, is completely private and under its direct control.
"Nextcloud is pleased to announce that the German Federal Information Technology Center (ITZBund) has chosen Nextcloud as their solution for efficient and secure file sharing and collaboration in a public tender. Nextcloud is operated by the ITZBund, the central IT service provider of the federal government, and made available to around 300,000 users. ITZBund uses a Nextcloud Enterprise Subscription to gain access to operational, scaling and security expertise of Nextcloud GmbH as well as long-term support of the software."ITZBund employs about 2,700 people that include IT specialists, engineers and network and security professionals. After the successful completion of the pilot, a public tender was floated by ITZBund which eventually selected Nextcloud as their preferred partner. Nextcloud scored high on security requirements and scalability, which it addressed through its unique Apps concept.
LG Makes its webOS Operating System Open Source, Again! [Tech Drive-in]
Not many might remember HP's capable webOS. The open source webOS operating system was HP's answer to the Android and iOS platforms. It was slick and very user-friendly from the start; some even considered it a better alternative to Android for tablets at the time. But like many other smaller players, HP's webOS just couldn't find enough takers, and the project was abruptly ended and sold off to LG.
State of New York does its Christmas shopping at ASML [Computable]
The US state of New York is going to buy a billion dollars' worth of chip machines from ASML. The investment is part of a ten-billion-dollar plan to build a nanotech complex near the University at Albany.
Sogeti to keep working on the KB's data warehouse [Computable]
Sogeti will once again be the data partner of the Koninklijke Bibliotheek (KB) for the next three years, with an option to extend to a maximum of six years. The IT company has managed the data warehouse since 2016 and now...
HPE strengthens gen-AI ties with Nvidia [Computable]
Infrastructure specialist Hewlett Packard Enterprise (HPE) is going to work more closely with AI hardware and software supplier Nvidia. Together they will offer a powerful enterprise computing solution for generative artificial intelligence (gen-AI) from January 2024.
Econocom announces international arm: Gather [Computable]
The Franco-Belgian IT service provider Econocom has set up a separate, internationally operating business unit under the name Gather. This arm bundles expertise in audio-visual solutions, unified communications and IT products and services, aimed at larger organisations...
Coalition: improve cyclist safety with sensors [Computable]
The newly founded Coalition for Cyclist Safety, with bicycle manufacturer Koninklijke Gazelle on board, is working to improve cyclist safety using sensor technology, also known as vehicle-to-everything (v2x) technology. The car industry serves as a shining example;...
Civil servants may experiment with gen-AI under conditions [Computable]
The Dutch cabinet will not manage to present a complete vision on generative AI (gen-AI) this year after all. The House of Representatives will receive such an integral picture of the impact this technology has on our society...
Software vendor Topdesk receives growth capital [Computable]
Delft-based Topdesk is getting a capital injection of two hundred million euros for growth and further development. CVC Capital Partners, which is taking a minority stake, will give the vendor of service management software more clout.
Four million to boost datacentre education in the EU [Computable]
The European Commission (EC) has awarded a grant of four million euros to the Colleges for European Datacenter Education (Cedce) project. Its goal is to offer high-quality education focused on datacentres. The project starts...
Startup Nedscaper brings Fox-IT founder on board [Computable]
Menno van der Marel, co-founder of IT security company Fox-IT, is becoming strategic director at Nedscaper. That Dutch/South African startup provides security services for Microsoft environments. Van der Marel is also investing 2.2 million euros in the company.
PQR CEO Marijke Kasius moves on to Bechtle [Computable]
Bechtle is appointing Marijke Kasius as country director for the group's companies in the Netherlands as of 1 January. The 39-year-old Kasius currently heads IT service provider PQR together with Marco Lesmeister. That position will be taken over by Marc...
Former IBM and Ajax director Frank Kales has died [Computable]
Frank Kales passed away on 8 December at the age of 81. Football connoisseurs knew him as general director of football club Ajax in the turbulent 1999-2000 period. Before that he worked for decades at IBM, where he ultimately...
EU AI Act drives up costs for software companies [Computable]
The arrival of the extensive and sometimes far-reaching artificial intelligence (AI) regulation on which EU negotiators reached agreement last night will not be without financial consequences for entrepreneurs. 'We have an AI deal. But an expensive one,' says...
Historic EU AI agreement reins in ChatGPT [Computable]
The EU AI Act will contain rules for the 'foundation models' that underpin the enormous progress in AI. The European Commission reached agreement on this last night with the European...
Eset delivers DNS filtering to KPN customers [Computable]
IT security company Eset is delivering domain name system (dns) filtering to telecom operator KPN. The service is meant to better protect KPN customers' home networks against malware, phishing and unwanted content.
Government bodies not yet working well with the Woo [Computable]
Government organisations often do not yet apply the new Open Government Act (Woo) effectively, mainly due to limited capacity and a lack of priority. Civil servants moreover feel constrained in their freedom to give advice. This is evident...
West Brabant schools help SMEs through a hackathon [Computable]
Students of the West Brabant educational institutions Avans, BUas and Curio are going to support entrepreneurs in their digital development. This Friday a hackathon takes place at the so-called Digiwerkplaats Mkb, where twenty Avans students, working in groups, will build a sustainability dashboard for three...
CWI organises Cobol event to raise urgency [Computable]
The Centrum Wiskunde & Informatica (CWI) is organising an event on 18 January about the future of Cobol and mainframes. For this strategic Cobol day the centre is working together with Quuks and the Software Improvement Group (SIG). According to the organisers...
Plan for cloud restrictions splits the EU [Computable]
A broad front is forming against the European Commission's plans for sovereignty requirements that would mainly benefit French cloud companies. In its opposition, the Netherlands has meanwhile secured the support of thirteen other EU member states, including Germany...
Unilever again chooses SAP warehouse system [Computable]
Because the production capacity of its factory in Nyirbator, Hungary, doubled, Unilever had to take a new, larger local warehouse into use, along with a new warehouse management system (wms). The food group once again chose...
Lyvia Group acquires Facility Kwadraat [Computable]
Sweden's Lyvia Group is making its first acquisition in the Netherlands: Facility Kwadraat. This Den Bosch-based company provides software-as-a-service (saas) for facility management, long-term maintenance, rental administration and real-estate management.
Adoption of generative AI is slow [Computable]
Despite the great interest, a majority of large enterprises do not yet use generative AI (gen-AI) such as ChatGPT. The infrastructure in particular forms a barrier to implementing the large language models (llms) that...
ASM puts 300 million into American expansion [Computable]
ASM, the chip-industry supplier until recently known as ASM International, is going to invest three hundred million dollars over the next five years in expanding its American operations. Its site in Arizona will be considerably enlarged.
Google's Gemini comes very close to OpenAI [Computable]
With the launch of Gemini, Google's largest and most ingenious artificial intelligence (AI) language model, the tech company is mounting an attack on the leading position of OpenAI's GPT-4. According to AI experts, the difference between the two large language models...
Booking.com hack poses a challenge for the travel sector [Computable]
The recent hack targeting Booking.com says everything about the impact of cybercrime on the hotel and travel sector. In the scam, customer data was stolen and offered for sale on the dark web. In the process...
Van Oord maps climate risks [Computable]
Van Oord has developed an open-source tool that is meant to provide insight into climate change and the risks that come with it. With the software, which combines multiple data layers, the dredging and marine engineering company wants to map coastal areas and ecosystems worldwide...
Django Authentication Video Tutorial [Simple is Better Than Complex]
In this tutorial series, we are going to explore Django’s authentication system by implementing sign up, login, logout, password change, password reset and protected views from non-authenticated users. This tutorial is organized in 8 videos, one for each topic, ranging from 4 min to 15 min each.
Starting a Django project from scratch, creating a virtual environment and an initial Django app. After that, we are going to setup the templates and create an initial view to start working on the authentication.
If you are already familiar with Django, you can skip this video and jump to the Sign Up tutorial below.
First thing we are going to do is implement a sign up view using the built-in UserCreationForm. In this video you are also going to get some insights on basic Django form processing.
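A minimal sketch of such a sign up view (the view name, URL name and template path are illustrative assumptions, not taken from the video):

```python
# views.py -- sign up using the built-in UserCreationForm (sketch)
from django.contrib.auth import login
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import redirect, render

def signup(request):
    if request.method == 'POST':
        form = UserCreationForm(request.POST)
        if form.is_valid():
            user = form.save()       # creates the user with a properly hashed password
            login(request, user)     # optionally authenticate the new user right away
            return redirect('home')  # 'home' is an assumed URL name
    else:
        form = UserCreationForm()
    return render(request, 'signup.html', {'form': form})
```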
In this video tutorial we are going to first include the built-in Django auth URLs to our project and proceed to implement the login view.
In this tutorial we are going to include Django logout and also start playing with conditional templates, displaying different content depending if the user is authenticated or not.
Next, the password change: a view where an authenticated user can change their password.
This tutorial is perhaps the most complicated one, because it involves several views and also sending emails. In this video tutorial you are going to learn how to use the default implementation of the password reset process and how to change the email messages.
After implementing the whole authentication system, this video gives you an overview of how to protect some views from non-authenticated users by using the @login_required decorator and also class-based view mixins.
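For reference, a small sketch of both approaches (view names and template path are placeholders):

```python
# Protecting views from non-authenticated users
from django.contrib.auth.decorators import login_required
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import HttpResponse
from django.views.generic import TemplateView

# Function-based view protected with the decorator
@login_required
def secret_page(request):
    return HttpResponse('Only authenticated users can see this.')

# Class-based view protected with the mixin (it must come first in the bases)
class SecretPage(LoginRequiredMixin, TemplateView):
    template_name = 'secret.html'
```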
Extra video showing how to integrate Django with Bootstrap 4 and how to use Django Crispy Forms to render Bootstrap forms properly. This video also includes some general advice and tips about using Bootstrap 4.
If you want to learn more about Django authentication and some extra stuff related to it, like how to use Bootstrap to make your auth forms look good, or how to write unit tests for your auth-related views, you can read the fourth part of my beginners guide to Django: A Complete Beginner’s Guide to Django - Part 4 - Authentication.
Of course the official documentation is the best source of information: Using the Django authentication system
The code used in this tutorial: github.com/sibtc/django-auth-tutorial-example
This was my first time recording this kind of content, so your feedback is highly appreciated. Please let me know what you think!
And don’t forget to subscribe to my YouTube channel! I will post exclusive Django tutorials there. So stay tuned! :-)
What You Should Know About The Django User Model [Simple is Better Than Complex]
The goal of this article is to discuss the caveats of the default Django user model implementation and also to give you some advice on how to address them. It is important to know the limitations of the current implementation so as to avoid the most common pitfalls.
Something to keep in mind is that the Django user model is heavily based on its initial implementation, which is at least 16 years old. Because users and authentication are a core part of the majority of web applications built with Django, most of its quirks have persisted in subsequent releases so as to maintain backward compatibility.
The good news is that Django offers many ways to override and customize its default implementation to fit your application's needs. But some of those changes must be made right at the beginning of the project; otherwise it will be too much of a hassle to change the database structure after your application is in production.
Below, the topics that we are going to cover in this article:
First, let’s explore the caveats and next we discuss the options.
Even though the username field is marked as unique, by default the uniqueness check is case-sensitive. That means the usernames john.doe and John.doe identify two different users in your application.

This can be a security issue if your application has social aspects built around the username, providing a public URL to a profile like Twitter, Instagram or GitHub do, for example.

It also delivers a poor user experience, because people don't expect john.doe to be a different username than John.Doe, and if users do not type the username exactly the way they did when they created their account, they might be unable to log in to your application.
Possible Solutions:

- swap the regular CharField for the CICharField instead (which is case-insensitive)
- override get_by_natural_key from the UserManager to query the database using iexact
- use a custom, case-insensitive ModelBackend implementation

This is not necessarily an issue, but it is important for you to understand what it means and what the effects are.
By default the username field accepts letters, numbers and the characters @, ., +, -, and _.
The catch here is in which letters it accepts. For example, joão would be a valid username. Similarly, Джон or 約翰 would also be valid usernames.
Django ships with two username validators: ASCIIUsernameValidator and UnicodeUsernameValidator. If the intended behavior is to only accept letters from A-Z, you may want to switch the username validator to ASCIIUsernameValidator, which accepts ASCII letters only.
Possible Solutions:

- switch the username validator to ASCIIUsernameValidator
Multiple users can have the same email address associated with their account.
By default the email is used to recover a password. If there is more than one user with the same email address, the password reset will be initiated for all accounts and the user will receive an email for each active account.
It also may not be an issue, but this will certainly make it impossible to offer the option of authenticating users by their email address (like those sites that allow you to log in with username or email address).
Possible Solutions:

- replace the user model, extending AbstractBaseUser to define the email field from scratch

By default the email field does not allow null, however it allows blank values, so it pretty much allows users to not provide an email address.

Also, this may not be an issue for your application. But if you intend to allow users to log in with email, it may be a good idea to enforce the registration of this field.
When using the built-in resources like user creation forms or when using model forms you need to pay attention to this detail if the desired behavior is to always have the user email.
Possible Solutions:

- replace the user model, extending AbstractBaseUser to define the email field from scratch

There is a small catch in the user creation process: if the set_password method is called passing None as a parameter, it will produce an unusable password. And that also means that the user will be unable to start a password reset to set their first password.
You can end up in that situation if you are using social networks like Facebook or Twitter to allow the user to create an account on your website.
Another way of ending up in this situation is simply by creating a user using User.objects.create_user() or User.objects.create_superuser() without providing an initial password.
Possible Solutions:
Changing the user model is something you want to do early on. After your database schema is generated and your database is populated it will be very tricky to swap the user model.
The reason why is that you are likely going to have foreign keys referencing the user table, and Django's internal tables will create hard references to the user table as well. If you plan to change that later on, you will need to change and migrate the database by yourself.
Possible Solutions:

- always replace the default user model with a sub-class of AbstractUser and change a single configuration in the settings module. This will give you tremendous freedom and make things way easier in the future should the requirements change.

To address the limitations we discussed in this article we have two options: (1) implement workarounds to fix the behavior of the default user model; (2) replace the default user model altogether and fix the issues for good.
What dictates which approach you should use is the stage your project is currently in:

- if your project is already in production using the default django.contrib.auth.models.User, go with the first solution and implement the workarounds;
- if you are starting a new project, replace the user model right away.

First let's have a look at a few workarounds that you can implement if your project is already in production. Keep in mind that those solutions assume that you don't have direct access to the User model, that is, you are currently using the default User model imported from django.contrib.auth.models.
If you did replace the User model, then jump to the next section to get better tips on how to fix the issues.
Before making any changes you need to make sure you don't have conflicting usernames in your database. For example, if you have a user with the username maria and another with the username Maria, you have to plan a data migration first. It is difficult to say exactly what to do, because it really depends on how you want to handle it. One option is to append some digits after the username, but that can disturb the user experience.
Now let’s say you checked your database and there are no conflicting usernames and you are good to go.
First thing you need to do is to protect your sign up forms to not allow conflicting usernames to create accounts.
Then on your user creation form, used to sign up, you could validate the username like this:
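The snippet did not survive in this copy of the post; a form validation consistent with the description could look like this (SignUpForm is an assumed name):

```python
# forms.py -- reject usernames that differ only by case (sketch)
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User

class SignUpForm(UserCreationForm):
    def clean_username(self):
        username = self.cleaned_data.get('username')
        # iexact makes the lookup case-insensitive
        if User.objects.filter(username__iexact=username).exists():
            self.add_error('username', 'A user with this username already exists.')
        return username
```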
If you are handling user creation in a rest API using DRF, you can do something similar in your serializer:
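Again the original snippet is missing; a DRF serializer doing the equivalent check could look like this (UserSerializer is an assumed name):

```python
# serializers.py -- same case-insensitive username check in DRF (sketch)
from django.contrib.auth.models import User
from rest_framework import serializers

class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('username', 'email', 'password')

    def validate_username(self, value):
        if User.objects.filter(username__iexact=value).exists():
            raise serializers.ValidationError('A user with this username already exists.')
        return value
```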
In the previous example the mentioned ValidationError is the one defined in DRF. The iexact lookup in the queryset parameter queries the database ignoring case.
Now that the user creation is sanitized we can proceed to define a custom authentication backend.
Create a module named backends.py anywhere in your project and add the following snippet:
backends.py
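The snippet is missing from this copy; a backend in the spirit described here, subclassing Django's ModelBackend and switching the lookup to iexact, could look like this:

```python
# backends.py -- case-insensitive authentication backend (sketch)
from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend

class CaseInsensitiveModelBackend(ModelBackend):
    def authenticate(self, request, username=None, password=None, **kwargs):
        UserModel = get_user_model()
        if username is None:
            username = kwargs.get(UserModel.USERNAME_FIELD)
        try:
            case_insensitive_username_field = '{}__iexact'.format(UserModel.USERNAME_FIELD)
            user = UserModel._default_manager.get(**{case_insensitive_username_field: username})
        except UserModel.DoesNotExist:
            # Run the default password hasher once to mitigate timing attacks,
            # the same trick Django's own ModelBackend uses
            UserModel().set_password(password)
        else:
            if user.check_password(password) and self.user_can_authenticate(user):
                return user
```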
Now switch the authentication backend in the settings.py module:
settings.py
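The settings change itself is missing here; based on the path mentioned right below, it would be along these lines:

```python
# settings.py
AUTHENTICATION_BACKENDS = ('mysite.core.backends.CaseInsensitiveModelBackend', )
```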
Please note that 'mysite.core.backends.CaseInsensitiveModelBackend' must be changed to the valid path where you created the backends.py module.

It is important to have handled all conflicting users before changing the authentication backend, because otherwise it could raise a 500 exception, MultipleObjectsReturned.
Here we can borrow the built-in UsernameField and customize it to append the ASCIIUsernameValidator to the list of validators. Then on the Meta of your User creation form you can replace the form field class:
Here all you can do is sanitize and handle the user input in all views where the user can modify their email address.
You have to include the email field on your sign up form/serializer as well.
Then just make it mandatory like this:
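The form snippet is missing; making the email mandatory on a sign up form could look like this:

```python
# forms.py -- enforce the email field on sign up (sketch)
from django import forms
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User

class SignUpForm(UserCreationForm):
    email = forms.EmailField(required=True)  # override the optional default

    class Meta:
        model = User
        fields = ('username', 'email')
```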
You can also check a complete and detailed example of this form on the project shared together with this post: userworkarounds
Now I’m going to show you how I usually like to extend and replace the default User model. It is a little bit verbose but that is the strategy that will allow you to access all the inner parts of the User model and make it better.
To replace the User model you have two options: extending AbstractBaseUser or extending AbstractUser.

To illustrate what that means, I drew the following diagram of how the default Django user model is implemented:

The green circle identified with the label User is actually the one you import from django.contrib.auth.models, and that is the implementation that we discussed in this article.
If you look at the source code, its implementation looks like this:
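The listing is missing here; from memory of Django's source, the concrete User model is essentially just this:

```python
class User(AbstractUser):
    """
    Users within the Django authentication system are represented by this
    model. Username and password are required. Other fields are optional.
    """
    class Meta(AbstractUser.Meta):
        swappable = 'AUTH_USER_MODEL'
```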
So basically it is just a concrete implementation of AbstractUser, meaning all the fields and logic are implemented in the abstract class.

It is done that way so we can easily extend the User model by creating a sub-class of AbstractUser and adding the other features and fields you like.
But there is a limitation: you can't override an existing model field. For example, you can't re-define the email field to make it mandatory or to change its length.

So extending the AbstractUser class is only useful when you want to modify its methods, add more fields or swap the objects manager.

If you want to remove a field or change how a field is defined, you have to extend the user model from AbstractBaseUser.
The best strategy to have full control over the user model is to create a new concrete class from PermissionsMixin and AbstractBaseUser.

Note that the PermissionsMixin is only necessary if you intend to use the Django admin or the built-in permissions framework. If you are not planning to use it you can leave it out. And if things change in the future, you can add the mixin, migrate the model, and you are ready to go.
So the implementation strategy looks like this:
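In code form the strategy is simply a new concrete class built from those two bases (the full field definitions follow in the next section):

```python
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin

class CustomUser(AbstractBaseUser, PermissionsMixin):
    """All fields are declared here, fully under our control."""
    # fields, manager, USERNAME_FIELD etc. go here -- see the complete sketch below
```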
Now I’m going to show you my go-to implementation. I always use PostgreSQL which, in my opinion, is the best database
to use with Django. At least it is the one with most support and features anyway. So I’m going to show an approach
that use the PostgreSQL’s CITextExtension
. Then I will show some options if you are using other database engines.
For this implementation I always create an app named accounts:
Then before adding any code I like to create an empty migration to install the PostgreSQL extensions that we are going to use:
Inside the migrations directory of the accounts app you will find an empty migration called 0001_postgres_extensions.py.
Modify the file to include the extension installation:
migrations/0001_postgres_extensions.py
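The migration body is missing in this copy; installing the citext extension through Django's built-in operation would look like this:

```python
# migrations/0001_postgres_extensions.py -- install the citext extension (sketch)
from django.contrib.postgres.operations import CITextExtension
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = []

    operations = [
        CITextExtension(),
    ]
```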
Now let’s implement our model. Open the models.py
file inside the accounts
app.
I always grab the initial code directly from Django's source on GitHub, copying the AbstractUser implementation, and modify it accordingly:
accounts/models.py
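The model listing is gone from this copy; a sketch matching the changes reviewed below (ASCII validator, case-insensitive username, and a mandatory, unique, case-insensitive email), abbreviated from the AbstractUser fields, could be:

```python
# accounts/models.py -- custom user model (sketch)
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager
from django.contrib.auth.validators import ASCIIUsernameValidator
from django.contrib.postgres.fields import CICharField, CIEmailField
from django.db import models
from django.utils import timezone

class CustomUser(AbstractBaseUser, PermissionsMixin):
    username_validator = ASCIIUsernameValidator()

    username = CICharField(
        'username',
        max_length=150,
        unique=True,
        help_text='Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only.',
        validators=[username_validator],
        error_messages={'unique': 'A user with that username already exists.'},
    )
    first_name = models.CharField('first name', max_length=150, blank=True)
    last_name = models.CharField('last name', max_length=150, blank=True)
    email = CIEmailField(
        'email address',
        unique=True,
        error_messages={'unique': 'A user with that email address already exists.'},
    )
    is_staff = models.BooleanField('staff status', default=False)
    is_active = models.BooleanField('active', default=True)
    date_joined = models.DateTimeField('date joined', default=timezone.now)

    objects = UserManager()

    EMAIL_FIELD = 'email'
    USERNAME_FIELD = 'username'
    REQUIRED_FIELDS = ['email']

    class Meta:
        verbose_name = 'user'
        verbose_name_plural = 'users'
```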
Let’s review what we changed here:
username_validator
to use ASCIIUsernameValidator
username
field now is using CICharField
which is not case-sensitiveemail
field is now mandatory, unique and is using CIEmailField
which is not case-sensitiveOn the settings module, add the following configuration:
settings.py
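The configuration itself is missing here; it is the standard model swap:

```python
# settings.py
AUTH_USER_MODEL = 'accounts.CustomUser'
```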
Now we are ready to create our migrations:
Apply the migrations:
And you should get a similar result if you are just creating your project and there are no other models/apps:

If you check your database schema you will see that there is no auth_user table (which is the default one); the user is now stored in the table accounts_customuser:
And all the foreign keys to the user model will be created pointing to this table. That's why it is important to do it right at the beginning of your project, before you create the database schema.
Now you have all the freedom. You can replace the first_name and last_name fields with a single field called name. You could remove the username field and identify your User model by the email (then just make sure you change the property USERNAME_FIELD to email).
You can grab the source code on GitHub: customuser
If you are not using PostgreSQL and want to implement case-insensitive authentication and you have direct access to the User model, a nice hack is to create a custom manager for the User model, like this:
accounts/models.py
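The snippet is missing; a custom manager overriding get_by_natural_key as described could look like this:

```python
# accounts/models.py -- case-insensitive lookup without PostgreSQL (sketch)
from django.contrib.auth.models import AbstractUser, UserManager

class CustomUserManager(UserManager):
    def get_by_natural_key(self, username):
        case_insensitive_username_field = '{}__iexact'.format(self.model.USERNAME_FIELD)
        return self.get(**{case_insensitive_username_field: username})

class CustomUser(AbstractUser):
    objects = CustomUserManager()
```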
Then you could also sanitize the username field in the clean() method, always saving it as lowercase, so you don't have to bother with case-variant/conflicting usernames:
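Again a sketch, continuing the CustomUser class above:

```python
class CustomUser(AbstractUser):
    objects = CustomUserManager()

    def clean(self):
        super().clean()
        self.username = self.username.lower()  # always store the username in lowercase
```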
In this tutorial we discussed a few caveats of the default User model implementation and presented a few options to address those issues.
The takeaway message here is: always replace the default User model.
If your project is already in production, don’t panic: there are ways to fix those issues following the recommendations in this post.
I also have two detailed blog posts: one on how to make the username field case-insensitive, and another on how to extend the Django user model:
You can also explore the source code presented in this post on GitHub:
How to Start a Production-Ready Django Project [Simple is Better Than Complex]
In this tutorial I’m going to show you how I usually start and organize a new Django project nowadays. I’ve tried many different configurations and ways to organize the project, but for the past 4 years or so this has been consistently my go-to setup.
Please note that this is not intended to be a “best practice” guide or to fit every use case. It's just the way I like to use Django, and it's also the way I've found allows your project to grow in a healthy way.
Index
Usually those are the premises I take into account when setting up a project:
Usually I work with three environment dimensions in my code: local, tests and production. I like to see it as a “mode” in which I run the project. What dictates which mode I'm running the project in is which settings.py I'm currently using.
The local dimension always comes first. It is the settings and setup that a developer will use on their local machine. All the defaults and configurations must serve the local development environment first.
The reason why I like to do it that way is that the project must be as simple as possible for a new hire to clone the repository, run the project and start coding.
The production environment will usually be configured and maintained by experienced developers who are more familiar with the code base itself. And because deployment should be automated, there is no reason for people to be re-creating the production server over and over again. So it is perfectly fine for the production setup to require a few extra steps and configuration.
The tests environment will also be available locally, so developers can test the code and run the static checks. But the idea of the tests environment is to expose it to a CI environment like Travis CI, Circle CI, AWS CodePipeline, etc. It is a simple setup in which you can install the project and run all the unit tests.
The production dimension is the real deal. This is the environment that goes live without the testing and debugging utilities.
I also use this “mode” or dimension to run the staging server.
A staging server is where you roll out new features and bug fixes before applying to the production server.
The idea here is that your staging server should run in production mode, and the only difference is going to be your static/media server and database server. And this can be achieved just by changing the configuration to tell what is the database connection string for example.
But the main thing is that you should not have any conditional in your code that checks if it is the production or staging server. The project should run exactly in the same way as in production.
Right from the beginning it is a good idea to setup a remote version control service. My go-to option is Git on GitHub. Usually I create the remote repository first then clone it on my local machine to get started.
Let’s say our project is called simple
, after creating the repository on GitHub I will create a directory named
simple
on my local machine, then within the simple
directory I will clone the repository, like shown on the
structure below:
Then I create the virtualenv outside of the Git repository:
Then alongside the simple and venv directories I may place some other support files related to the project which I do not plan to commit to the Git repository.
The reason I do that is because it is more convenient to destroy and re-create/re-clone either the virtual environment or the repository itself.
It is also good to store your virtual environment outside of the git repository/project root so you don’t need to bother ignoring its path when using libs like flake8, isort, black, tox, etc.
You can also use tools like virtualenvwrapper to manage your virtual environments, but I prefer doing it this way because everything is in one place. And if I no longer need to keep a given project on my local machine, I can delete it completely without leaving behind anything related to the project on my machine.
The next step is installing Django inside the virtualenv so we can use the django-admin commands. Inside the simple directory (where the git repository was cloned) start a new project:

Pay attention to the . at the end of the command. It is necessary so as not to create yet another directory called simple.
So now the structure should be something like this:
At this point I already complement the project package directory with three extra directories for templates, static and locale.
Both templates and static we are going to manage at project level and app level. These refer to the global templates and static files.
The locale directory is necessary in case you are using i18n to translate your application to other languages. It is where you are going to store the .mo and .po files.
So the structure now should be something like this:
Inside the project root (2) I like to create a directory called requirements with all the .txt files, breaking down the project dependencies like this:

- base.txt: main dependencies, strictly necessary to make the project run. Common to all environments
- tests.txt: inherits from base.txt, plus test utilities
- local.txt: inherits from tests.txt, plus development utilities
- production.txt: inherits from base.txt, plus production-only dependencies

Note that I do not have a staging.txt requirements file. That's because the staging environment is going to use the production.txt requirements, so we have an exact copy of the production environment.
Now let’s have a look inside each of those requirements file and what are the python libraries that I always use no matter what type of Django project I’m developing.
base.txt

- python-decouple: reads configuration from .env files in a safe way and keeps it out of the settings.py module. It also helps with decoupling configuration from source code

tests.txt
The -r base.txt line inherits all the requirements defined in the base.txt file.
local.txt
The -r tests.txt line inherits all the requirements defined in the base.txt and tests.txt files.
production.txt
The -r base.txt line inherits all the requirements defined in the base.txt file.
Also following the environments and modes premise, I like to set up multiple settings modules. These serve as the entry point to determine in which mode I'm running the project.

Inside the simple project package, I create a new directory called settings and break down the files like this:
Note that I removed the settings.py that used to live inside the simple/ (3) directory.
The majority of the code will live inside the base.py settings module. Everything that we can set only once in base.py, changing its value via python-decouple, we should keep in base.py and never repeat/override in the other settings modules.
After the removal of the main settings.py, a nice touch is to modify the manage.py file to set local.py as the default settings module, so we can still run commands like python manage.py runserver without any further parameters:
manage.py
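The modified manage.py is not included in this copy; the usual pattern is a one-line change of the default settings module:

```python
#!/usr/bin/env python
import os
import sys

if __name__ == '__main__':
    # Default to the local settings; other environments override DJANGO_SETTINGS_MODULE
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'simple.settings.local')
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)
```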
Now let’s have a look on each of those settings modules.
base.py
A few comments on the overall base settings file contents:

- The config() calls come from the python-decouple library. It exposes the configuration as environment variables and retrieves each value with the expected data type. Read more about python-decouple in this guide: How to Use Python Decouple
- SECRET_KEY, DEBUG and ALLOWED_HOSTS default to local/development environment values. That means a new developer won't need to set a local .env and provide some initial values to run locally
- The database connection is defined as a one-line string, and dj_database_url translates it to the Python dictionary Django expects
- For MEDIA_ROOT we are navigating two directories up to create a media directory outside the git repository but inside our project workspace (inside the directory simple/ (1)). So everything is handy and we won't be committing test uploads to our repository
- In the base.py settings I reserve two blocks for third-party Django libraries that I may install, such as Django Rest Framework or Django Crispy Forms. And the first-party settings refer to custom settings that I may create exclusively for our project. Usually I will prefix them with the project name, like SIMPLE_XXX
local.py
This is where I set up the Django Debug Toolbar, for example, or set the email backend to display the sent emails on the console, instead of having to set up a valid email server to work on the project.
All the code that is only relevant for the development process goes here.
You can use it to set up other libs like Django Silk, to run profiling without exposing it to production.
tests.py
Here I add configurations that help us run the test cases faster. Sometimes disabling the migrations may not work if you have interdependencies between the apps' models, so Django may fail to create a database without the migrations.
In some projects it is better to keep the test database after the execution.
production.py
The most important part of the production settings is to enable all the security settings Django offers. I like to do it that way because you can't run the development server with most of those settings turned on.
The other thing is the Sentry configuration.
Note the simple.__version__ on the release. Next we are going to explore how I usually manage the version of the project.
I like to reuse Django's get_version utility for a simple and PEP 440 compliant version identification. Inside the project's __init__.py module, you can do something like this:
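A sketch of what that looks like, reusing django.utils.version:

```python
# simple/__init__.py -- PEP 440 compliant version string
from django.utils.version import get_version

VERSION = (1, 0, 0, 'final', 0)  # (major, minor, micro, release stage, serial)

__version__ = get_version(VERSION)
```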
The only downside of using get_version directly from the Django module is that it won't be able to resolve the git hash for alpha versions.
A possible solution is making a copy of the django/utils/version.py file in your project and importing it locally, so it will be able to identify your git repository within the project folder.
But it also depends on what kind of versioning you are using for your project. If the version of your project is not really relevant to the end user, and you want to keep track of it only for internal management, like identifying the release on a Sentry issue, you could use date-based release versioning.
A Django app is a Python package that you “install” using the INSTALLED_APPS setting in your settings file. An app can live pretty much anywhere: inside or outside the project package, or even in a library that you installed using pip.
Indeed, your Django apps may be reusable in other projects. But that doesn't mean they should be. Don't let it drive your project design, and don't get obsessed over it. Also, an app shouldn't necessarily represent a “part” of your website/web application.
It is perfectly fine for some apps to not have models, or for other apps to have only views. Some of your modules don't even need to be a Django app at all. I like to see my Django projects as a big Python package and organize it in a way that makes sense, not trying to place everything inside reusable apps.
The general recommendation of the official Django documentation is to place your apps in the project root (alongside the manage.py file, identified here in this tutorial by the simple/ (2) folder).

But actually I prefer to create my apps inside the project package (identified in this tutorial by the simple/ (3) folder). I create a module named apps, and inside it I create my Django apps. The main reason is that it creates a nice namespace for the app. It helps you easily identify that a particular import is part of your project. The namespace also helps when creating logging rules to handle events in a different way.
Here is an example of how I do it:
In the example above, the folders accounts/ and core/ are Django apps created with the command django-admin startapp.

Those two apps are also always in my projects. The accounts app is the one I use to replace the default Django User model, and it is also the place where I eventually create password reset, account activation, sign up, etc.

The core app I use for general/global implementations, for example to define a model that will be used across most of the other apps. I try to keep it decoupled from other apps, not importing other apps' resources. It is usually a good place to implement general-purpose or reusable views and mixins.
Something to pay attention to when using this approach is that you need to change the name of the app's configuration, inside the apps.py file of the Django app:
accounts/apps.py
You should rename it like this, to respect the namespace:
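Something along these lines (the default generated name would have been just 'accounts'):

```python
# accounts/apps.py
from django.apps import AppConfig

class AccountsConfig(AppConfig):
    name = 'simple.apps.accounts'
```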
Then on your INSTALLED_APPS you are going to create a reference to your apps like this:
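For example:

```python
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # first-party apps, easily recognizable thanks to the namespace
    'simple.apps.accounts',
    'simple.apps.core',
]
```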
The namespace also helps to organize your INSTALLED_APPS, making your project's apps easily recognizable.
making your project apps easily recognizable.
This is what my app structure looks like:
The first thing I do is create a folder named tests so I can break down my tests into several files. I always add a factories.py to create my model factories using the factory-boy library.
For both static and templates, always first create a directory with the same name as the app, to avoid name collisions when Django collects all static files and tries to resolve the templates.
The admin.py may or may not be there, depending on whether I'm using the Django Admin contrib app.
Other common modules that you may have are utils.py, forms.py, managers.py, services.py, etc.
Now I’m going to show you the configuration that I use for tools like isort
, black
, flake8
, coverage
and tox
.
The .editorconfig file is a standard recognized by all major IDEs and code editors. It helps the editor understand the file formatting rules used in the project.
It tells the editor if the project is indented with tabs or spaces. How many spaces/tabs. What’s the max length for a line of code.
I like to use Django’s .editorconfig
file. Here is what it looks like:
.editorconfig
Flake8 is a Python library that wraps PyFlakes, pycodestyle and Ned Batchelder’s McCabe script. It is a great toolkit for checking your code base against coding style (PEP8), programming errors (like “library imported but unused” and “Undefined name”) and to check cyclomatic complexity.
To learn more about flake8, check this tutorial I posted a while a go: How to Use Flake8.
setup.cfg
isort is a Python utility / library to sort imports alphabetically, and automatically separated into sections.
To learn more about isort, check this tutorial I posted a while a go: How to Use Python isort Library.
setup.cfg
Pay attention to known_first_party: it should be the name of your project so isort can group your project's imports.
Black is a life changing library to auto-format your Python applications. There is no way I’m coding with Python nowadays without using Black.
Here is the basic configuration that I use:
pyproject.toml
In this tutorial I described my go-to project setup when working with Django. That’s pretty much how I start all my projects nowadays.
Here is the final project structure for reference:
You can also explore the code on GitHub: django-production-template.
How to install Chrome OS on your (old) computer [Laatste Artikelen - Webwereld]
Google has been working hard on Chrome OS for years and, together with various computer manufacturers, releases Chrome devices running that operating system. But you don't necessarily have to buy a dedicated device: you can also put the system on your (old) computer yourself, and we'll show you how.
How to Use Chart.js with Django [Simple is Better Than Complex]
Chart.js is a cool open source JavaScript library that helps you render HTML5 charts. It is responsive and offers 8 different chart types.
In this tutorial we are going to explore a little bit of how to make Django talk with Chart.js and render some simple charts based on data extracted from our models.
For this tutorial all you are going to do is add the Chart.js lib to your HTML page:
You can download it from Chart.js official website and use it locally, or you can use it from a CDN using the URL above.
I’m going to use the same example I used for the tutorial How to Create Group By Queries With Django ORM which is a good complement to this tutorial because actually the tricky part of working with charts is to transform the data so it can fit in a bar chart / line chart / etc.
We are going to use the two models below, Country and City:
And the raw data stored in the database:
cities

| id | name | country_id | population |
| --- | --- | --- | --- |
1 | Tokyo | 28 | 36,923,000 |
2 | Shanghai | 13 | 34,000,000 |
3 | Jakarta | 19 | 30,000,000 |
4 | Seoul | 21 | 25,514,000 |
5 | Guangzhou | 13 | 25,000,000 |
6 | Beijing | 13 | 24,900,000 |
7 | Karachi | 22 | 24,300,000 |
8 | Shenzhen | 13 | 23,300,000 |
9 | Delhi | 25 | 21,753,486 |
10 | Mexico City | 24 | 21,339,781 |
11 | Lagos | 9 | 21,000,000 |
12 | São Paulo | 1 | 20,935,204 |
13 | Mumbai | 25 | 20,748,395 |
14 | New York City | 20 | 20,092,883 |
15 | Osaka | 28 | 19,342,000 |
16 | Wuhan | 13 | 19,000,000 |
17 | Chengdu | 13 | 18,100,000 |
18 | Dhaka | 4 | 17,151,925 |
19 | Chongqing | 13 | 17,000,000 |
20 | Tianjin | 13 | 15,400,000 |
21 | Kolkata | 25 | 14,617,882 |
22 | Tehran | 11 | 14,595,904 |
23 | Istanbul | 2 | 14,377,018 |
24 | London | 26 | 14,031,830 |
25 | Hangzhou | 13 | 13,400,000 |
26 | Los Angeles | 20 | 13,262,220 |
27 | Buenos Aires | 8 | 13,074,000 |
28 | Xi'an | 13 | 12,900,000 |
29 | Paris | 6 | 12,405,426 |
30 | Changzhou | 13 | 12,400,000 |
31 | Shantou | 13 | 12,000,000 |
32 | Rio de Janeiro | 1 | 11,973,505 |
33 | Manila | 18 | 11,855,975 |
34 | Nanjing | 13 | 11,700,000 |
35 | Rhine-Ruhr | 16 | 11,470,000 |
36 | Jinan | 13 | 11,000,000 |
37 | Bangalore | 25 | 10,576,167 |
38 | Harbin | 13 | 10,500,000 |
39 | Lima | 7 | 9,886,647 |
40 | Zhengzhou | 13 | 9,700,000 |
41 | Qingdao | 13 | 9,600,000 |
42 | Chicago | 20 | 9,554,598 |
43 | Nagoya | 28 | 9,107,000 |
44 | Chennai | 25 | 8,917,749 |
45 | Bangkok | 15 | 8,305,218 |
46 | Bogotá | 27 | 7,878,783 |
47 | Hyderabad | 25 | 7,749,334 |
48 | Shenyang | 13 | 7,700,000 |
49 | Wenzhou | 13 | 7,600,000 |
50 | Nanchang | 13 | 7,400,000 |
51 | Hong Kong | 13 | 7,298,600 |
52 | Taipei | 29 | 7,045,488 |
53 | Dallas–Fort Worth | 20 | 6,954,330 |
54 | Santiago | 14 | 6,683,852 |
55 | Luanda | 23 | 6,542,944 |
56 | Houston | 20 | 6,490,180 |
57 | Madrid | 17 | 6,378,297 |
58 | Ahmedabad | 25 | 6,352,254 |
59 | Toronto | 5 | 6,055,724 |
60 | Philadelphia | 20 | 6,051,170 |
61 | Washington, D.C. | 20 | 6,033,737 |
62 | Miami | 20 | 5,929,819 |
63 | Belo Horizonte | 1 | 5,767,414 |
64 | Atlanta | 20 | 5,614,323 |
65 | Singapore | 12 | 5,535,000 |
66 | Barcelona | 17 | 5,445,616 |
67 | Munich | 16 | 5,203,738 |
68 | Stuttgart | 16 | 5,200,000 |
69 | Ankara | 2 | 5,150,072 |
70 | Hamburg | 16 | 5,100,000 |
71 | Pune | 25 | 5,049,968 |
72 | Berlin | 16 | 5,005,216 |
73 | Guadalajara | 24 | 4,796,050 |
74 | Boston | 20 | 4,732,161 |
75 | Sydney | 10 | 5,000,500 |
76 | San Francisco | 20 | 4,594,060 |
77 | Surat | 25 | 4,585,367 |
78 | Phoenix | 20 | 4,489,109 |
79 | Monterrey | 24 | 4,477,614 |
80 | Inland Empire | 20 | 4,441,890 |
81 | Rome | 3 | 4,321,244 |
82 | Detroit | 20 | 4,296,611 |
83 | Milan | 3 | 4,267,946 |
84 | Melbourne | 10 | 4,650,000 |
countries

| id | name |
| --- | --- |
1 | Brazil |
2 | Turkey |
3 | Italy |
4 | Bangladesh |
5 | Canada |
6 | France |
7 | Peru |
8 | Argentina |
9 | Nigeria |
10 | Australia |
11 | Iran |
12 | Singapore |
13 | China |
14 | Chile |
15 | Thailand |
16 | Germany |
17 | Spain |
18 | Philippines |
19 | Indonesia |
20 | United States |
21 | South Korea |
22 | Pakistan |
23 | Angola |
24 | Mexico |
25 | India |
26 | United Kingdom |
27 | Colombia |
28 | Japan |
29 | Taiwan |
For the first example we are only going to retrieve the top 5 most populous cities and render it as a pie chart. In this strategy we are going to return the chart data as part of the view context and inject the results in the JavaScript code using the Django Template language.
views.py
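The listing did not survive extraction; a view consistent with the description below (the import path for the City model is an assumption) could be:

```python
# views.py -- top 5 most populous cities for the pie chart (sketch)
from django.shortcuts import render
from mysite.core.models import City  # assumed app path

def pie_chart(request):
    labels = []
    data = []

    queryset = City.objects.order_by('-population')[:5]
    for city in queryset:
        labels.append(city.name)
        data.append(city.population)

    return render(request, 'pie_chart.html', {
        'labels': labels,
        'data': data,
    })
```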
Basically in the view above we are iterating through the City queryset and building a list of labels and a list of data. Here in this case the data is the population count saved in the City model.
For the urls.py just a simple routing:
urls.py
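A routing sketch matching the view above (the URL name is an assumption):

```python
# urls.py
from django.urls import path
from mysite.core import views  # assumed app path

urlpatterns = [
    path('pie-chart/', views.pie_chart, name='pie-chart'),
]
```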
Now the template. I got a basic snippet from the Chart.js Pie Chart Documentation.
pie_chart.html
In the example above the base.html template is not important, but you can see it in the code example I shared at the end of this post.
This strategy is not ideal but works fine. The bad thing is that we are using the Django Template Language to interfere with the JavaScript logic. When we put {{ data|safe }} we are injecting a variable that came from the server directly into the JavaScript code.
The code above looks like this:
As the title says, we are now going to render a bar chart using an async call.
views.py
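A sketch of the two views described below, assuming the City model has a country foreign key and a population field:

```python
from django.db.models import Sum
from django.http import JsonResponse
from django.shortcuts import render

from .models import City


def home(request):
    return render(request, 'home.html')


def population_chart(request):
    labels = []
    data = []

    # Group cities by country and sum each country's population
    queryset = City.objects.values('country__name') \
        .annotate(country_population=Sum('population')) \
        .order_by('-country_population')
    for entry in queryset:
        labels.append(entry['country__name'])
        data.append(entry['country_population'])

    return JsonResponse(data={
        'labels': labels,
        'data': data,
    })
```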
So here we are using two views. The home view is the main page where the chart is loaded. The other view, population_chart, has the sole responsibility of aggregating the data and returning a JSON response with the labels and data.
If you are wondering what this queryset is doing, it is grouping the cities by country and aggregating the total population of each country. The result is going to be a list of country + total population. To learn more about this kind of query, have a look at this post: How to Create Group By Queries With Django ORM.
urls.py
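A matching routing sketch (again assuming an app named core):

```python
from django.urls import path

from core import views

urlpatterns = [
    path('', views.home, name='home'),
    path('population-chart/', views.population_chart, name='population-chart'),
]
```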
home.html
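A sketch of the template; the CDN includes for jQuery and Chart.js are assumptions:

```html
{% extends 'base.html' %}

{% block content %}
  <canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>

  <script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
  <script>
    $(function () {
      var $chart = $("#population-chart");
      // Fetch labels/data from the population_chart view, then render
      $.ajax({
        url: $chart.data("url"),
        success: function (response) {
          new Chart($chart[0].getContext("2d"), {
            type: 'bar',
            data: {
              labels: response.labels,
              datasets: [{
                label: 'Population',
                data: response.data
              }]
            }
          });
        }
      });
    });
  </script>
{% endblock %}
```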
Now we have a better separation of concerns. Looking at the chart container:
We added a reference to the URL that holds the chart rendering logic. Later on we use it to execute the Ajax call. Inside the success callback we then finally execute the Chart.js-related code using the JsonResponse data.
I hope this tutorial helped you to get started with working with charts using Chart.js. I published another tutorial on the same subject a while ago but using the Highcharts library. The approach is pretty much the same: How to Integrate Highcharts.js with Django.
If you want to grab the code I used in this tutorial you can find it here: github.com/sibtc/django-chartjs-example.
How to Save Extra Data to a Django REST Framework Serializer [Simple is Better Than Complex]
In this tutorial you are going to learn how to pass extra data to your serializer, before saving it to the database.
When using regular Django forms, there is a common pattern where we save the form with commit=False and then pass some extra data to the instance before saving it to the database, like this:
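A sketch of that pattern (PostForm and the author field are illustrative names):

```python
form = PostForm(request.POST)
if form.is_valid():
    post = form.save(commit=False)
    post.author = request.user  # extra data the form does not carry
    post.save()
```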
This is very useful because we can save the required information using only one database query, and it also makes it possible to handle non-nullable columns that were not defined in the form.
To simulate this pattern using a Django REST Framework serializer you can do something like this:
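The DRF counterpart is to pass the extra data as keyword arguments to save():

```python
serializer = PostSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(author=request.user)
```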
You can also pass several parameters at once:
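For example (the date field is an illustrative name):

```python
from django.utils import timezone

serializer.save(author=request.user, date=timezone.now())
```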
In this example I created an app named core.
models.py
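A hypothetical model for this example (all names are illustrative):

```python
from django.contrib.auth.models import User
from django.db import models


class Post(models.Model):
    author = models.ForeignKey(User, on_delete=models.CASCADE)
    title = models.CharField(max_length=255)
    body = models.TextField()
    date = models.DateTimeField(null=True)
```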
serializers.py
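A matching serializer sketch; author and date are intentionally left out so the view can fill them in:

```python
from rest_framework import serializers

from .models import Post


class PostSerializer(serializers.ModelSerializer):
    class Meta:
        model = Post
        fields = ('id', 'title', 'body')
```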
views.py
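A sketch of a function-based API view using the models/serializers above:

```python
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response

from .serializers import PostSerializer


@api_view(['POST'])
def post_create(request):
    serializer = PostSerializer(data=request.data)
    serializer.is_valid(raise_exception=True)
    # The extra data is passed here, not by the client
    serializer.save(author=request.user)
    return Response(serializer.data, status=status.HTTP_201_CREATED)
```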
Very similar example, using the same models.py and serializers.py as in the previous example.
views.py
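The same idea with a generic class-based view; overriding perform_create is the usual DRF hook for this:

```python
from rest_framework import generics

from .models import Post
from .serializers import PostSerializer


class PostCreateAPIView(generics.CreateAPIView):
    queryset = Post.objects.all()
    serializer_class = PostSerializer

    def perform_create(self, serializer):
        serializer.save(author=self.request.user)
```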
How to Use Date Picker with Django [Simple is Better Than Complex]
In this tutorial we are going to explore three date/datetime pickers options that you can easily use in a Django project. We are going to explore how to do it manually first, then how to set up a custom widget and finally how to use a third-party Django app with support to datetime pickers.
The implementation of a date picker is mostly done on the front-end.
The key part of the implementation is to ensure Django receives the date input value in the correct format, and also that Django is able to reproduce the format when rendering a form with initial data.
We can also use custom widgets to provide a deeper integration between the front-end and back-end and also to promote better reuse throughout a project.
In the next sections we are going to explore the following date pickers:
* Tempus Dominus Bootstrap 4 (Docs / Source)
* XDSoft DateTimePicker (Docs / Source)
* Fengyuan Chen's Datepicker (Docs / Source)
This is a great JavaScript library and it integrates well with Bootstrap 4. The downside is that it requires moment.js and more or less needs Font Awesome for the icons.
It only makes sense to use this library if you are already using Bootstrap 4 + jQuery; otherwise the list of CSS and JS dependencies may look a little overwhelming.
To install it you can use their CDN or download the latest release from their GitHub Releases page.
If you downloaded the code from the releases page, grab the processed code from the build/ folder.
Below, a static HTML example of the datepicker:
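A minimal static sketch; pull the exact CSS/JS includes from the Tempus Dominus documentation:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <!-- Tempus Dominus needs Bootstrap 4 CSS, Font Awesome and its own CSS here,
       plus jQuery, moment.js, Bootstrap 4 JS and its own JS before </body>. -->
</head>
<body>
  <div class="input-group date" id="datetimepicker1" data-target-input="nearest">
    <input type="text" class="form-control datetimepicker-input" data-target="#datetimepicker1"/>
    <div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
      <div class="input-group-text"><i class="fa fa-calendar"></i></div>
    </div>
  </div>
  <script>
    $(function () {
      $("#datetimepicker1").datetimepicker();
    });
  </script>
</body>
</html>
```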
The challenge now is to have this input snippet integrated with a Django form.
forms.py
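A sketch of the form; the input_formats must match what the picker produces, and the widget attrs hook the input into the picker markup:

```python
from django import forms


class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=forms.DateTimeInput(attrs={
            'class': 'form-control datetimepicker-input',
            'data-target': '#datetimepicker1',
        })
    )
```

In the template, the same input-group markup from the static example is used, with {{ form.date }} in place of the raw input tag.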
template
The script tag can be placed anywhere because the snippet $(function () { ... }); will run the datetimepicker initialization when the page is ready. The only requirement is that this script tag is placed after the jQuery script tag.
You can create the widget in any app you want; here I'm going to assume we have a Django app named core.
core/widgets.py
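A sketch close to what the text describes:

```python
from django.forms import DateTimeInput


class BootstrapDateTimePickerInput(DateTimeInput):
    template_name = 'widgets/bootstrap_datetimepicker.html'

    def get_context(self, name, value, attrs):
        # Derive a unique ID from the field name and expose it to the template
        datetimepicker_id = 'datetimepicker_{name}'.format(name=name)
        if attrs is None:
            attrs = dict()
        attrs['data-target'] = '#{id}'.format(id=datetimepicker_id)
        attrs['class'] = 'form-control datetimepicker-input'
        context = super().get_context(name, value, attrs)
        context['widget']['datetimepicker_id'] = datetimepicker_id
        return context
```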
In the implementation above we generate a unique ID, datetimepicker_id, and also include it in the widget context. Then the front-end implementation is done inside the widget HTML snippet.
widgets/bootstrap_datetimepicker.html
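A sketch of the widget template (the moment.js format string mirrors the form's input_formats):

```html
<div class="input-group date" id="{{ widget.datetimepicker_id }}" data-target-input="nearest">
  {% include 'django/forms/widgets/input.html' %}
  <div class="input-group-append" data-target="#{{ widget.datetimepicker_id }}" data-toggle="datetimepicker">
    <div class="input-group-text"><i class="fa fa-calendar"></i></div>
  </div>
</div>

<script>
  $(function () {
    $("#{{ widget.datetimepicker_id }}").datetimepicker({
      format: 'DD/MM/YYYY HH:mm',
    });
  });
</script>
```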
Note how we make use of the built-in django/forms/widgets/input.html template.
Now the usage:
core/forms.py
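A usage sketch:

```python
from django import forms

from .widgets import BootstrapDateTimePickerInput


class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=BootstrapDateTimePickerInput()
    )
```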
Now simply render the field:

template

{{ form.date }}

The good thing about having the widget is that your form could have several date fields using it, and you could then simply render the whole form with {{ form }}.
The XDSoft DateTimePicker is a very versatile date picker and doesn’t rely on moment.js or Bootstrap, although it looks good in a Bootstrap website.
It is easy to use and it is very straightforward.
You can download the source from the GitHub releases page.
Below, a static example so you can see the minimum requirements and how all the pieces come together:
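A static sketch; the includes are the plugin's CSS, jQuery and the plugin's JS (see the XDSoft releases page):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <!-- jquery.datetimepicker.min.css, jQuery and
       jquery.datetimepicker.full.min.js includes go here -->
</head>
<body>
  <input id="datetimepicker" type="text">
  <script>
    $("#datetimepicker").datetimepicker();
  </script>
</body>
</html>
```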
A basic integration with Django would look like this:
forms.py
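```python
from django import forms


class DateForm(forms.Form):
    date = forms.DateTimeField(input_formats=['%d/%m/%Y %H:%M'])
```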
Simple form, default widget, nothing special.
Now using it on the template:
template
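A sketch; the picker's format string ('d/m/Y H:i', PHP-style tokens) matches the form's input_formats:

```html
{{ form.date }}

<script>
  $(function () {
    $("#id_date").datetimepicker({
      format: 'd/m/Y H:i',
    });
  });
</script>
```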
The id_date is the default ID Django generates for the form fields (id_ + name).
core/widgets.py
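Here the widget only needs to point at its template:

```python
from django.forms import DateTimeInput


class XDSoftDateTimePickerInput(DateTimeInput):
    template_name = 'widgets/xdsoft_datetimepicker.html'
```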
widgets/xdsoft_datetimepicker.html
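A sketch selecting the input by name, as the next paragraph explains:

```html
{% include 'django/forms/widgets/input.html' %}

<script>
  $(function () {
    $("input[name='{{ widget.name }}']").datetimepicker({
      format: 'd/m/Y H:i',
    });
  });
</script>
```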
To have a more generic implementation, this time we select the field to initialize the component using its name instead of its id, in case the user changes the id prefix.
Now the usage:
core/forms.py
template
This is a very beautiful and minimalist date picker. Unfortunately there is no time support, but if you only need dates this is a great choice.
To install this datepicker you can either use their CDN or download the sources from their GitHub releases page. Please note that they do not provide compiled/processed JavaScript files, but you can download those to your local machine using the CDN.
A basic integration with Django (note that we are now using a DateField instead of a DateTimeField):
forms.py
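```python
from django import forms


class DateForm(forms.Form):
    # ISO dates work with Django's default input formats
    date = forms.DateField()
```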
template
core/widgets.py
widgets/fengyuanchen_datepicker.html
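Following the same name-based approach as the XDSoft widget; the format option is per the plugin's docs:

```html
{% include 'django/forms/widgets/input.html' %}

<script>
  $(function () {
    $("input[name='{{ widget.name }}']").datepicker({
      format: 'yyyy-mm-dd',
    });
  });
</script>
```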
Usage:
core/forms.py
template
The implementation is very similar no matter which date/datetime picker you are using. Hopefully this tutorial provided some insights on how to integrate this kind of frontend library into a Django project.
As always, the best source of information about each of those libraries are their official documentation.
I also created an example project to show the usage and implementation of the widgets for each of the libraries presented in this tutorial. Grab the source code at github.com/sibtc/django-datetimepicker-example.
How to Implement Grouped Model Choice Field [Simple is Better Than Complex]
The Django forms API has two field types for working with multiple options: ChoiceField and ModelChoiceField. Both use a select input as the default widget, and they work in a similar way, except that ModelChoiceField is designed to handle QuerySets and work with foreign key relationships.
A basic implementation using a ChoiceField would be:
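A sketch with illustrative choices:

```python
from django import forms


class ExpenseForm(forms.Form):
    CHOICES = (
        ('rent', 'Rent'),
        ('groceries', 'Groceries'),
        ('eating_out', 'Eating out'),
    )
    amount = forms.DecimalField()
    category = forms.ChoiceField(choices=CHOICES)
```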
You can also organize the choices in groups to generate the <optgroup> tags, like this:
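Nested choice tuples render as <optgroup> elements:

```python
CHOICES = (
    ('Housing', (
        ('rent', 'Rent'),
        ('utilities', 'Utilities'),
    )),
    ('Food', (
        ('groceries', 'Groceries'),
        ('eating_out', 'Eating out'),
    )),
)
```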
When you are using a ModelChoiceField, unfortunately, there is no built-in solution. Recently I found a nice solution on Django's ticket tracker, where someone proposed adding an opt_group argument to ModelChoiceField.
While the discussion is still ongoing, Simon Charette proposed a really good solution.
Let’s see how we can integrate it in our project.
First consider the following models:
models.py
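A sketch matching the description below (a self-referencing Category used for grouping):

```python
from django.db import models


class Category(models.Model):
    name = models.CharField(max_length=30)
    # A category with parent=None acts as a "group category"
    parent = models.ForeignKey('self', on_delete=models.CASCADE,
                               null=True, blank=True)

    def __str__(self):
        return self.name


class Expense(models.Model):
    amount = models.DecimalField(max_digits=10, decimal_places=2)
    date = models.DateField()
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
```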
So now our category, instead of being a regular choices field, is a model, and the Expense model has a relationship with it using a foreign key.
If we create a ModelForm using this model, the result will be very similar to our first example.
To simulate grouped categories you will need the code below. First create a new module named fields.py:
fields.py
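A sketch very close to Simon Charette's proposed solution from the ticket discussion:

```python
from functools import partial
from itertools import groupby
from operator import attrgetter

from django.forms.models import ModelChoiceField, ModelChoiceIterator


class GroupedModelChoiceIterator(ModelChoiceIterator):
    def __init__(self, field, groupby):
        self.groupby = groupby
        super().__init__(field)

    def __iter__(self):
        if self.field.empty_label is not None:
            yield ("", self.field.empty_label)
        queryset = self.queryset
        # Can't use iterator() when the queryset uses prefetch_related()
        if not queryset._prefetch_related_lookups:
            queryset = queryset.iterator()
        for group, objs in groupby(queryset, self.groupby):
            yield (group, [self.choice(obj) for obj in objs])


class GroupedModelChoiceField(ModelChoiceField):
    def __init__(self, *args, choices_groupby, **kwargs):
        if isinstance(choices_groupby, str):
            choices_groupby = attrgetter(choices_groupby)
        elif not callable(choices_groupby):
            raise TypeError(
                'choices_groupby must either be a str or a callable '
                'accepting a single argument'
            )
        self.iterator = partial(GroupedModelChoiceIterator,
                                groupby=choices_groupby)
        super().__init__(*args, **kwargs)
```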
And here is how you use it in your forms:
forms.py
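A usage sketch:

```python
from django import forms

from .fields import GroupedModelChoiceField
from .models import Category, Expense


class ExpenseForm(forms.ModelForm):
    category = GroupedModelChoiceField(
        queryset=Category.objects.exclude(parent=None).order_by('parent'),
        choices_groupby='parent',
    )

    class Meta:
        model = Expense
        fields = ('amount', 'date', 'category')
```

Note that itertools.groupby only groups adjacent rows, so the queryset must keep rows of the same group together (hence the order_by('parent') above).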
Because in the example above I used a self-referencing relationship, I had to add the exclude(parent=None) to keep the "group categories" from showing up in the select input as valid options.
You can download the code used in this tutorial from GitHub: github.com/sibtc/django-grouped-choice-field-example
Credits for the solution go to Simon Charette on the Django ticket tracker.
How to Use JWT Authentication with Django REST Framework [Simple is Better Than Complex]
JWT stands for JSON Web Token and it is an authentication strategy used by client/server applications where the client is a web application using JavaScript and some frontend framework like Angular, React or Vue.js.
In this tutorial we are going to explore the specifics of JWT authentication. If you want to learn more about Token-based authentication using Django REST Framework (DRF), or if you want to know how to start a new DRF project you can read this tutorial: How to Implement Token Authentication using Django REST Framework. The concepts are the same, we are just going to switch the authentication backend.
The JWT is just an authorization token that should be included in all requests:
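Roughly, the header looks like this (placeholders instead of a real token):

```
Authorization: Bearer <base64-header>.<base64-payload>.<signature>
```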
The JWT is acquired by exchanging a username + password for an access token and a refresh token.
The access token is usually short-lived (expires in 5 min or so, can be customized though).
The refresh token lives a little bit longer (expires in 24 hours, also customizable). It is comparable to an authentication session. After it expires, you need a full login with username + password again.
Why is that?
It’s a security feature and also it’s because the JWT holds a little bit more information. If you look closely the example I gave above, you will see the token is composed by three parts:
Those are three distinctive parts that compose a JWT:
So we have here:
This information is encoded using Base64. If we decode, we will see something like this:
header
payload
signature
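A sketch with illustrative values:

```
// header
{
  "typ": "JWT",
  "alg": "HS256"
}

// payload
{
  "token_type": "access",
  "exp": 1543828431,
  "jti": "7f5997b7150d46579dc2b49167097e7b",
  "user_id": 1
}
```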
The signature is issued by the JWT backend, using the header base64 + payload base64 + SECRET_KEY. Upon each request this signature is verified. If any information in the header or in the payload was changed by the client, it will invalidate the signature. The only way of checking and validating the signature is by using your application's SECRET_KEY. Among other things, that's why you should always keep your SECRET_KEY secret!
For this tutorial we are going to use the djangorestframework_simplejwt library, recommended by the DRF developers.
settings.py
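A minimal sketch of the relevant setting:

```python
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework_simplejwt.authentication.JWTAuthentication',
    ],
}
```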
urls.py
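```python
from django.urls import path
from rest_framework_simplejwt.views import (
    TokenObtainPairView,
    TokenRefreshView,
)

urlpatterns = [
    path('api/token/', TokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('api/token/refresh/', TokenRefreshView.as_view(), name='token_refresh'),
]
```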
For this tutorial I will use the following route and API view:
views.py
urls.py
I will be using HTTPie to consume the API endpoints via the terminal. But you can also use cURL (readily available in many OS) to try things out locally.
Or alternatively, use the DRF web interface by accessing the endpoint URLs like this:
The first step is to authenticate and obtain the token. The endpoint is /api/token/ and it only accepts POST requests.
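For example, with HTTPie (credentials are illustrative):

```
http post http://127.0.0.1:8000/api/token/ username=vitor password=123
```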
So basically your response body is the two tokens:
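Something like this, with the tokens abbreviated to placeholders:

```
{
  "access": "<access-token>",
  "refresh": "<refresh-token>"
}
```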
After that you are going to store both the access token and the refresh token on the client side, usually in the localStorage.
In order to access the protected views on the backend (i.e., the API endpoints that require authentication), you should include the access token in the header of all requests, like this:
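With HTTPie, for example:

```
http http://127.0.0.1:8000/hello/ "Authorization: Bearer <access-token>"
```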
You can use this access token for the next five minutes.
After five minutes the token expires, and if you try to access the view again you are going to get an error like the following:
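With the simplejwt backend the response body is, roughly:

```
{
  "detail": "Given token not valid for any token type",
  "code": "token_not_valid"
}
```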
To get a new access token, you should use the refresh token endpoint /api/token/refresh/, posting the refresh token:
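For example:

```
http post http://127.0.0.1:8000/api/token/refresh/ refresh=<refresh-token>
```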
The return is a new access token that you should use in the subsequent requests.
The refresh token is valid for the next 24 hours. When it finally expires too, the user will need to perform a full authentication again, using their username and password, to get a new set of access and refresh tokens.
At first glance the refresh token may look pointless, but in fact it is necessary to make sure the user still has the correct permissions. If your access token has a long expiration time, it may take longer to update the information associated with the token. That's because the authentication check is done by cryptographic means instead of querying the database and verifying the data, so some information is effectively cached.
There is also a security aspect, in the sense that the refresh token only travels in the POST data, while the access token is sent via an HTTP header, which may be logged along the way. So this also gives a short window of exposure, should your access token be compromised.
This should cover the basics on the backend implementation. It’s worth checking the djangorestframework_simplejwt settings for further customization and to get a better idea of what the library offers.
The implementation on the frontend depends on what framework/library you are using. Some libraries and articles covering popular frontend frameworks like angular/react/vue.js:
The code used in this tutorial is available at github.com/sibtc/drf-jwt-example.
Advanced Form Rendering with Django Crispy Forms [Simple is Better Than Complex]
[Django 2.1.3 / Python 3.6.5 / Bootstrap 4.1.3]
In this tutorial we are going to explore some of the Django Crispy Forms features to handle advanced/custom forms rendering. This blog post started as a discussion in our community forum, so I decided to compile the insights and solutions in a blog post to benefit a wider audience.
Table of Contents
Throughout this tutorial we are going to implement the following Bootstrap 4 form using Django APIs:
This was taken from Bootstrap 4 official documentation as an example of how to use form rows.
NOTE! The examples below refer to a base.html template. Consider the code below:
base.html
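A minimal sketch (the Bootstrap 4 include is assumed):

```html
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- Bootstrap 4 CSS include goes here -->
  <title>Crispy Forms Examples</title>
</head>
<body>
  <div class="container">
    {% block content %}{% endblock %}
  </div>
</body>
</html>
```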
Install it using pip:
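```
pip install django-crispy-forms
```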
Add it to your INSTALLED_APPS and select which styles to use:
settings.py
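```python
INSTALLED_APPS = [
    # ...
    'crispy_forms',
]

CRISPY_TEMPLATE_PACK = 'bootstrap4'
```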
For detailed instructions on how to install django-crispy-forms, please refer to this tutorial: How to Use Bootstrap 4 Forms With Django.
The Python code required to represent the form above is the following:
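A sketch close to what the rendered form suggests (field names and placeholders are taken from the Bootstrap example):

```python
from django import forms


class AddressForm(forms.Form):
    STATES = (
        ('', 'Choose...'),
        ('MG', 'Minas Gerais'),
        ('SP', 'Sao Paulo'),
        ('RJ', 'Rio de Janeiro'),
    )
    email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
    password = forms.CharField(widget=forms.PasswordInput())
    address_1 = forms.CharField(
        label='Address',
        widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
    )
    address_2 = forms.CharField(
        widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
    )
    city = forms.CharField()
    state = forms.ChoiceField(choices=STATES)
    zip_code = forms.CharField(label='Zip')
    check_me_out = forms.BooleanField(required=False)
```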
In this case I’m using a regular Form
, but it could also be a ModelForm
based on a Django model with similar
fields. The state
field and the STATES
choices could be either a foreign key or anything else. Here I’m just using
a simple static example with three Brazilian states.
Template:
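Plausibly, the first variant is the plain Django rendering:

```html
{% extends 'base.html' %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {{ form }}
    <button type="submit" class="btn btn-primary">Sign in</button>
  </form>
{% endblock %}
```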
Rendered HTML (normal state and with validation errors): [screenshots]
Same form code as in the example before.
Template:
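A sketch using the crispy filter on the whole form:

```html
{% extends 'base.html' %}
{% load crispy_forms_tags %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {{ form|crispy }}
    <button type="submit" class="btn btn-primary">Sign in</button>
  </form>
{% endblock %}
```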
Rendered HTML (normal state and with validation errors): [screenshots]
Same form code as in the first example.
Template:
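A sketch placing each field by hand with the as_crispy_field filter inside the Bootstrap grid:

```html
{% extends 'base.html' %}
{% load crispy_forms_tags %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    <div class="form-row">
      <div class="form-group col-md-6 mb-0">{{ form.email|as_crispy_field }}</div>
      <div class="form-group col-md-6 mb-0">{{ form.password|as_crispy_field }}</div>
    </div>
    {{ form.address_1|as_crispy_field }}
    {{ form.address_2|as_crispy_field }}
    <div class="form-row">
      <div class="form-group col-md-6 mb-0">{{ form.city|as_crispy_field }}</div>
      <div class="form-group col-md-4 mb-0">{{ form.state|as_crispy_field }}</div>
      <div class="form-group col-md-2 mb-0">{{ form.zip_code|as_crispy_field }}</div>
    </div>
    {{ form.check_me_out|as_crispy_field }}
    <button type="submit" class="btn btn-primary">Sign in</button>
  </form>
{% endblock %}
```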
Rendered HTML (normal state and with validation errors): [screenshots]
We could use the crispy forms layout helpers to achieve the same result as above. The implementation is done inside the form's __init__ method:
forms.py
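A sketch using the Layout, Row and Column helpers:

```python
from django import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Column, Layout, Row, Submit


class AddressForm(forms.Form):
    # ... same fields as before ...

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            Row(
                Column('email', css_class='form-group col-md-6 mb-0'),
                Column('password', css_class='form-group col-md-6 mb-0'),
                css_class='form-row'
            ),
            'address_1',
            'address_2',
            Row(
                Column('city', css_class='form-group col-md-6 mb-0'),
                Column('state', css_class='form-group col-md-4 mb-0'),
                Column('zip_code', css_class='form-group col-md-2 mb-0'),
                css_class='form-row'
            ),
            'check_me_out',
            Submit('submit', 'Sign in')
        )
```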
The template implementation is very minimal:
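```html
{% extends 'base.html' %}
{% load crispy_forms_tags %}

{% block content %}
  {% crispy form %}
{% endblock %}
```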
The end result is the same.
Rendered HTML (normal state and with validation errors): [screenshots]
You may also customize the field template and easily reuse it throughout your application. Let's say we want to use the custom Bootstrap 4 checkbox:
From the official documentation, the necessary HTML to output the input above:
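From the Bootstrap 4 docs:

```html
<div class="custom-control custom-checkbox">
  <input type="checkbox" class="custom-control-input" id="customCheck1">
  <label class="custom-control-label" for="customCheck1">Check this custom checkbox</label>
</div>
```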
Using the crispy forms API, we can create a new template for this custom field in our “templates” folder:
custom_checkbox.html
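A sketch of the field template, using crispy's crispy_field tag to inject the extra CSS class:

```html
{% load crispy_forms_field %}

<div class="form-group">
  <div class="custom-control custom-checkbox">
    {% crispy_field field 'class' 'custom-control-input' %}
    <label class="custom-control-label" for="{{ field.id_for_label }}">{{ field.label }}</label>
  </div>
</div>
```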
Now we can create a new crispy field, either in our forms.py module or in a new Python module named fields.py or something.
forms.py
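```python
from crispy_forms.layout import Field


class CustomCheckbox(Field):
    template = 'custom_checkbox.html'
```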
We can use it now in our form definition:
forms.py
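A usage sketch:

```python
class CustomFieldForm(AddressForm):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            # ... other fields ...
            CustomCheckbox('check_me_out'),
            Submit('submit', 'Sign in')
        )
```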
(PS: the AddressForm was defined earlier and is the same as in the previous example.)
The end result: [screenshot]
There is much more Django Crispy Forms can do. Hopefully this tutorial gave you some extra insights on how to use the form helpers and layout classes. As always, the official documentation is the best source of information:
Django Crispy Forms layouts docs
Also, the code used in this tutorial is available on GitHub at github.com/sibtc/advanced-crispy-forms-examples.
How to Implement Token Authentication using Django REST Framework [Simple is Better Than Complex]
In this tutorial you are going to learn how to implement token-based authentication using Django REST Framework (DRF). Token authentication works by exchanging a username and password for a token that will be used in all subsequent requests to identify the user on the server side.
The specifics of how the authentication is handled on the client side vary a lot depending on the technology/language/framework you are working with. The client could be a mobile application using iOS or Android. It could be a desktop application using Python or C++. It could be a Web application using PHP or Ruby.
But once you understand the overall process, it’s easier to find the necessary resources and documentation for your specific use case.
Token authentication is suitable for client-server applications where the token is safely stored. You should never expose your token, as it would be (sort of) the equivalent of handing out your username and password.
Table of Contents
So let’s start from the very beginning. Install Django and DRF:
Create a new Django project:
Navigate to the myapi folder:
Start a new app. I will call my app core:
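```
pip install django djangorestframework
django-admin startproject myapi
cd myapi
django-admin startapp core
# note: this tutorial refers to the app as myapi.core, so the exact
# location of the app folder may differ in your setup
```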
Here is what your project structure should look like:
Add the core app (which you created) and the rest_framework app (which you installed) to INSTALLED_APPS, inside the settings.py module:
myapi/settings.py
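A sketch (the dotted app path follows this tutorial's project layout):

```python
INSTALLED_APPS = [
    # Django's default apps...
    'myapi.core',
    'rest_framework',
]
```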
Return to the project root (the folder where the manage.py script is), and migrate the database with python manage.py migrate.
Let’s create our first API view just to test things out:
myapi/core/views.py
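A sketch of such a view:

```python
from rest_framework.response import Response
from rest_framework.views import APIView


class HelloView(APIView):
    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)
```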
Now register a path in the urls.py module:
myapi/urls.py
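```python
from django.urls import path

from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
]
```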
So now we have an API with just one endpoint, /hello/, on which we can perform GET requests. We can use the browser to consume this endpoint, just by accessing the URL http://127.0.0.1:8000/hello/. We can also ask to receive the response as plain JSON data by passing the format parameter in the querystring, like http://127.0.0.1:8000/hello/?format=json.
Both methods are fine to try out a DRF API, but sometimes a command line tool is handier, as we can play more easily with the request headers. You can use cURL, which is widely available on all major Linux/macOS distributions, but usually I prefer HTTPie, a pretty awesome Python command line tool:
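```
curl http://127.0.0.1:8000/hello/

http http://127.0.0.1:8000/hello/
```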
Now let’s protect this API endpoint so we can implement the token authentication:
myapi/core/views.py
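The same view, now requiring authentication:

```python
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView


class HelloView(APIView):
    permission_classes = (IsAuthenticated,)

    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)
```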
Try again to access the API endpoint:
And now we get an HTTP 403 Forbidden error. Now let’s implement the token authentication so we can access this endpoint.
We need to add two pieces of information to our settings.py module. First include rest_framework.authtoken in your INSTALLED_APPS, then add TokenAuthentication to the REST_FRAMEWORK configuration:
myapi/settings.py
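```python
INSTALLED_APPS = [
    # ...
    'rest_framework',
    'rest_framework.authtoken',
]

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.TokenAuthentication',
    ],
}
```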
Migrate the database once more (python manage.py migrate) to create the table that will store the authentication tokens.
Now we need a user account; let's just create one using the manage.py command line utility (python manage.py createsuperuser).
The easiest way to generate a token, just for testing purpose, is using the command line utility again:
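The authtoken app ships a management command for this (username is a placeholder):

```
python manage.py drf_create_token <username>
```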
This piece of information, the random string 9054f7aa9305e012b3c2300408c3dfdf390fcddf, is what we are going to use next to authenticate.
But now that we have TokenAuthentication in place, let's try to make another request to our /hello/ endpoint:
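```
http http://127.0.0.1:8000/hello/
```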
Notice how our API is now providing some extra information to the client on the required authentication method.
So finally, let’s use our token!
And that’s pretty much it. For now on, on all subsequent request you should include the header Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf
.
The formatting looks weird and usually it is a point of confusion on how to set this header. It will depend on the client and how to set the HTTP request header.
For example, if we were using cURL, the command would be something like this:
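```
curl http://127.0.0.1:8000/hello/ -H "Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf"
```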
Or if it was a Python requests call:
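```python
import requests

url = 'http://127.0.0.1:8000/hello/'
headers = {'Authorization': 'Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'}
response = requests.get(url, headers=headers)
```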
Or if we were using Angular, we could implement an HttpInterceptor and set the header there.
DRF provides an endpoint for users to request an authentication token using their username and password.
Include the following route to the urls.py module:
myapi/urls.py
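```python
from django.urls import path
from rest_framework.authtoken.views import obtain_auth_token

from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
    path('api-token-auth/', obtain_auth_token, name='api_token_auth'),
]
```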
So now we have a brand new API endpoint, /api-token-auth/. Let's first inspect it:
It doesn’t handle GET requests. Basically it’s just a view to receive a POST request with username and password.
Let’s try again:
The response body is the token associated with this particular user. After this point you store this token and apply it to the future requests.
Then, again, the way you are going to make the POST request to the API depends on the language/framework you are using.
If this were an Angular client, you could store the token in the localStorage; if it were a desktop CLI application, you could store it in a text file in the user's home directory, in a dot file.
Hopefully this tutorial provided some insights on how the token authentication works. I will try to follow up this tutorial providing some concrete examples of Angular applications, command line applications and Web clients as well.
It is important to note that the default token implementation has some limitations, such as only one token per user and no built-in way to set an expiry date on the token.
You can grab the code used in this tutorial at github.com/sibtc/drf-token-auth-example.
On 1 May 2019 my blog turns 10 years old, and then I will (for now) stop. It is also time to bring this blog up to date and occupy myself with…
Python GUI application: consistent backups with fsarchiver [linux blogs franz ulenaers]
A partition of type "Linux LVM" can be used for logical volumes, but also for a "snapshot"!
A snapshot can be an exact copy of a logical volume, frozen at a given moment: this makes it possible to make consistent backups of logical volumes while the logical volumes are in use!
My physical and logical volumes were created as follows:
physical volume
pvcreate /dev/sda1
physical volume group
vgcreate mydell /dev/sda1
logical volumes
lvcreate -L 1G -n boot mydell
lvcreate -L 100G -n data mydell
lvcreate -L 50G -n home mydell
lvcreate -L 50G -n root mydell
lvcreate -L 1G -n swap mydell
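A hedged sketch of how such a consistent backup could look (sizes and paths are illustrative):

```
lvcreate -L 5G -s -n home_snap /dev/mydell/home    # freeze 'home' in a snapshot
fsarchiver savefs /backup/home.fsa /dev/mydell/home_snap
lvremove -f /dev/mydell/home_snap                  # discard the snapshot afterwards
```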
Procedures for MyCloud [linux blogs franz ulenaers]
The procedure lftpUlefr01Cloudupload is used to upload files and folders to MyCloud.
The procedure lftpUlefr01Cloudmirror is used to fetch changes back down.
Both procedures use the program lftp (a "sophisticated file transfer program") and are used to allow synchronisation between laptop and desktop.
The procedures were adapted so that hidden files and hidden folders are also processed; for the mirror, certain mostly unchanged files and folders were filtered out (--exclude) so that they are not processed again.
On the Cloud they remain as a backup, but not on the individual laptops (this was done for older mails of 2016, months 2016-11 and 2016-12, and for all earlier months of 2017, up to and including September)!
See the attachments.
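Hypothetically, such lftp calls could look like this (host, credentials and paths are placeholders):

```
lftp -u user,password mycloud.example -e "mirror --reverse --only-newer /home/user/Documents /Documents; quit"
lftp -u user,password mycloud.example -e "mirror --only-newer --exclude-glob 2016-1[12]/ /Documents /home/user/Documents; quit"
```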
Python GUI application tune2fs [linux blogs franz ulenaers]
Created Wednesday 18 October 2017
Written in the Python programming language using GTK+ 3.
Start it in a terminal with: sudo python mytune2fs.py
Alternatively, compile the Python source and start the compiled version.
Python GUI application myarchive.py [linux blogs franz ulenaers]
Created Friday 13 October 2017
Start it in terminal mode with:
* sudo python myarchive.py
* sudo python myarchive2.py
Alternatively, build a compiled version and start the generated objects.
python myfsck.py [linux blogs franz ulenaers]
Created Friday 13 October 2017
See the attached file myfsck.py.
This application can mount and unmount devices, but it is mainly intended to run the fsck command.
Root privileges are required!
Help?
* start it in terminal mode
* sudo python myfsck.py
Making a file impossible to modify, rename or delete in Linux! [linux blogs franz ulenaers]
How: sudo chattr +i /data/Encrypt/.encfs6.xml
Afterwards you cannot modify, rename or delete the file, even as root.
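The attribute can be inspected and toggled off again, for example:

```
sudo chattr +i /data/Encrypt/.encfs6.xml   # set the immutable attribute
lsattr /data/Encrypt/.encfs6.xml           # the 'i' flag is now listed
sudo chattr -i /data/Encrypt/.encfs6.xml   # remove it when changes are needed
```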
Backup laptop [linux blogs franz ulenaers]
Links in Linux [linux blogs franz ulenaers]
In Linux you can give a file multiple names, so you can store a file in several places in the file tree without taking up (much) extra space on the hard disk.
There are two kinds of links:
* hard links
* symbolic links
A hard link uses the same file number (inode).
A hard link does not work for a directory!
A hard link must be on the same file system, and the original file must exist!
With a symbolic link the file gets a new file number; the file being pointed to does not have to exist.
A symbolic link also works for a directory.
The file linuxcursus is 4.2M in size, inode number 293800.
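For example (paths are illustrative):

```
ln linuxcursus /backup/linuxcursus         # hard link: same inode (293800)
ln -s linuxcursus /backup/linuxcursus.sym  # symbolic link: gets its own inode
ls -li linuxcursus /backup/linuxcursus     # both names show the same inode
```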
Samsung Galaxy Z Flip, S20(+) and S20 Ultra hands-on [Laatste Artikelen - Webwereld]
Samsung invited us to take a close look at its three newest smartphones. We gladly took the opportunity and share our findings with you.
Hands-on: Synology Virtual Machine Manager [Laatste Artikelen - Webwereld]
It is well known by now that your NAS can be used for much more than just storing files, but did you know you can also manage virtual machines with it? We explain how.
What you need to know about FIDO keys [Laatste Artikelen - Webwereld]
Thanks to the FIDO2 standard it is possible to log in securely to various online services without a password. Microsoft and Google, among others, already offer options for this. More organisations are likely to follow this year.
How to use your iPhone without an Apple ID [Laatste Artikelen - Webwereld]
Nowadays you have to create an account for almost everything you want to do online, even if you don't plan to work online or simply don't feel like sharing your data with the manufacturer. Today we show you how to manage that with your iPhone or iPad.
Major Internet Explorer flaw already exploited in the wild [Laatste Artikelen - Webwereld]
A new zero-day vulnerability has been discovered in Microsoft Internet Explorer. The new flaw is already being exploited, and a security update is not yet available.
How to install Chrome extensions in the new Edge [Laatste Artikelen - Webwereld]
The new version of Edge is built on code from the Chromium project, but in the default configuration extensions can only be installed from the Microsoft Store. Fortunately, that is fairly easy to change.
Windows 10 upgrade still free [Laatste Artikelen - Webwereld]
A few years ago Microsoft gave users the opportunity to upgrade from Windows 7 to Windows 10 for free. At times this went so far that even users who didn't want the upgrade got it anyway. The offer has long since ended, but upgrading for free is still possible, and it is now easier than ever. We tell you how.
Chrome, Edge, Firefox: which browser is the fastest? [Laatste Artikelen - Webwereld]
A lot has changed in the market for PC browsers. About five years ago there was more competition and more independent development; now only two engines remain: the one behind Chrome and the one behind Firefox. With the release of Microsoft's Blink-based Edge this month, we look at benchmarks and practical tests.
Cooler Master redesigns thermal paste tubes over drug suspicions [Laatste Artikelen - Webwereld]
Cooler Master has changed the appearance of its thermal paste syringes because the company says it is tired of having to explain to parents that the contents are not drugs, but thermal paste.
Mounting a USB stick without root, setting labels, making a file system clean [ulefr01 - blog franz ulenaers]
Embedded Linux Engineer [Job Openings]
You're eager to work with Linux in an exciting environment. You have a lot of PC equipment experience. Prior experience with embedded Linux or small-footprint distributions is considered a plus. Region East/West Flanders.
We're looking for someone capable of teaching Linux and/or Solaris professionally. Ideally the candidate has experience teaching Linux, and possibly other non-Windows OSes as well.
Kernel Developer [Job Openings]
We're looking for someone with kernel device driver development experience, preferably (but not necessarily) with knowledge of AV or TV devices.
C/C++ Developers [Job Openings]
We're looking for Linux C/C++ developers. Region Leuven.
Feed | RSS | Last fetched | Next fetched after |
---|---|---|---|
Computable | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
GNOMON | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
http://www.h-online.com/news/atom.xml | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
https://www.heise.de/en | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
Job Openings | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
Laatste Artikelen - Webwereld | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
linux blogs franz ulenaers | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
Linux Journal - The Original Magazine of the Linux Community | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
Linux Today | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
OMG! Ubuntu! | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
Planet Python | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
Press Releases Archives - The Document Foundation Blog | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
Simple is Better Than Complex | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
Slashdot: Linux | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
Tech Drive-in | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |
ulefr01 - blog franz ulenaers | XML | 07-10-2024, 17:01 | 07-10-2024, 20:01 |