Planet Python
Last update: October 31, 2021 07:41 AM UTC
October 31, 2021
Zero to Mastery
Python Monthly Newsletter - October 2021
23rd issue of the Python Monthly Newsletter! Read by 20,000+ Python developers every month. This monthly Python newsletter covers the latest Python news so that you stay up-to-date with the industry and keep your skills sharp.
Podcast.__init__
Build Composable And Reusable Feature Engineering Pipelines with Feature-Engine
Summary
Every machine learning model has to start with feature engineering. This is the process of combining input variables into a more meaningful signal for the problem that you are trying to solve. Many times this process can lead to duplicating code from previous projects, or introducing technical debt in the form of poorly maintained feature pipelines. In order to make the practice more manageable Soledad Galli created the feature-engine library. In this episode she explains how it has helped her and others build reusable transformations that can be applied in a composable manner with your scikit-learn projects. She also discusses the importance of understanding the data that you are working with and the domain in which your model will be used to ensure that you are selecting the right features.
Announcements
- Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
- When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- Your host as usual is Tobias Macey and today I’m interviewing Soledad Galli about feature-engine, a Python library to engineer features for use in machine learning models
Interview
- Introductions
- How did you get introduced to Python?
- Can you describe what feature-engine is and the story behind it?
- What are the complexities that are inherent to feature engineering?
- What are the problems that are introduced due to incidental complexity and technical debt?
- What was missing in the available set of libraries/frameworks/toolkits for feature engineering that you are solving for with feature-engine?
- What are some examples of the types of domain knowledge that are needed to effectively build features for an ML model?
- Given the fact that features are constructed through methods such as normalizing data distributions, imputing missing values, combining attributes, etc. what are some of the potential risks that are introduced by incorrectly applied transformations or invalid assumptions about the impact of these manipulations?
- Can you describe how feature-engine is implemented?
- How have the design and goals of the project changed or evolved since you started working on it?
- What (if any) difference exists in the feature engineering process for frameworks like scikit-learn as compared to deep learning approaches using PyTorch, Tensorflow, etc.?
- Can you describe the workflow of identifying and generating useful features during model development?
- What are the tools that are available for testing and debugging of the feature pipelines?
- What do you see as the potential benefits or drawbacks of integrating feature-engine with a feature store such as Feast or Tecton?
- What are the most interesting, innovative, or unexpected ways that you have seen feature-engine used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on feature-engine?
- When is feature-engine the wrong choice?
- What do you have planned for the future of feature-engine?
Keep In Touch
- @Soledad_Galli on Twitter
- solegalli on GitHub
Picks
- Tobias
- Soledad
- The Social Dilemma
- Don’t Be Evil by Rana Foroohar
Closing Announcements
- Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links
- feature-engine
- Feature Engineering
- Python Feature Engineering Cookbook
- scikit-learn
- Feature Stores
- Pandas
- PyTorch
- Tensorflow
- Feast
- Tecton
- Kaggle
- Dask
The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA
October 30, 2021
ItsMyCode
Python JSONPath
ItsMyCode |
JSONPath is an expression language used to query JSON data in Python. JSONPath is similar to XPath, which is used to query XML data.
JSONPath provides a simple syntax for querying JSON data and extracting the desired values in Python, instead of writing nested loops and key lookups by hand.
JSONPath Library in Python
There are several JSONPath libraries for Python, and the most popular one is the jsonpath-ng library. It is written in pure Python and supports both Python 2 and Python 3.
jsonpath-ng is the final implementation of JSONPath for Python that aims to be standard-compliant, including arithmetic and binary comparison operators, as defined in the original JSONPath proposal.
This package merges both jsonpath-rw and jsonpath-rw-ext and provides several AST API enhancements, such as the ability to update or remove nodes in the tree.
Installing jsonpath-ng Module
To install jsonpath-ng library, use the below pip install command.
pip install --upgrade jsonpath-ng
The above command installs the latest version of the jsonpath-ng library on your machine. Once installed, you can import it in your Python code using the below statement.
import jsonpath_ng
JSONPath Operators
Below is the list of operators you can use to query JSON data values.
| Syntax | Meaning |
|---|---|
| jsonpath1 . jsonpath2 | All nodes matched by jsonpath2 starting at any node matching jsonpath1 |
| jsonpath [ whatever ] | Same as jsonpath.whatever |
| jsonpath1 .. jsonpath2 | All nodes matched by jsonpath2 that descend from any node matching jsonpath1 |
| jsonpath1 where jsonpath2 | Any nodes matching jsonpath1 with a child matching jsonpath2 |
| jsonpath1 | jsonpath2 | Any nodes matching the union of jsonpath1 and jsonpath2 |
Parsing a Simple JSON Data using JSONPath
Here is a simple example of parsing JSON and fetching a value using its attribute key.
# Program to parse JSON data in Python
import json
from jsonpath_ng import parse
employee_data = '{"id":1, "first_name":"Chandler", "last_name":"Bing"}'
json_data = json.loads(employee_data)
jsonpath_expr = parse('$.first_name')
first_name = jsonpath_expr.find(json_data)
print("The First Name of the employee is:", first_name[0].value)
Output
The First Name of the employee is: Chandler
Parsing a JSON Array using a JSONPath Expression
When a JSON key contains a list of values, we can use a JSONPath expression to parse and query the exact field values from the JSON.
{
"books": [
{
"category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"isbn": "6-246-2356-8",
"price": 8.95
},
{
"category": "fiction",
"author": "Evelyn Waugh",
"title": "Sword of Honour",
"isbn": "5-444-34234-8",
"price": 12.99
},
{
"category": "fiction",
"author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99
},
{
"category": "fiction",
"author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99
}
]
}
In the above JSON data, if we need the list of all the books’ ISBNs, we can get them using the JSONPath expression shown below.
# Program to parse JSON data in Python
import json
from jsonpath_ng import parse
with open("books.json", 'r') as json_file:
    json_data = json.load(json_file)
jsonpath_expression = parse('books[*].isbn')
for match in jsonpath_expression.find(json_data):
    print(f'Books ISBN: {match.value}')
Output
Books ISBN: 6-246-2356-8
Books ISBN: 5-444-34234-8
Books ISBN: 0-553-21311-3
Books ISBN: 0-395-19395-8
The post Python JSONPath appeared first on ItsMyCode.
Codementor
Django Website Template - Material Kit Design
Open-source Django Website template crafted on top of a pixel-perfect Bootstrap 5 design: Material Kit (free version).
Weekly Python StackOverflow Report
(ccxcix) stackoverflow python report
These are the ten most rated questions at Stack Overflow last week.
Between brackets: [question score / answers count]
Build date: 2021-10-30 13:46:37 GMT
- NumPy: construct squares along diagonal of matrix / expand diagonal matrix - [15/3]
- Convert subset of columns to rows by combining columns - [14/3]
- Efficient algorithm to get all the combinations of numbers that are within a certain range from 2 lists in python - [8/2]
- Is the key order the same for OrderedDict and dict? - [6/3]
- Django REST API accept list instead of dictionary in post request - [6/2]
- cannot update spyder=5.1.5 on new anaconda install - [6/1]
- Why does starred assignment produce lists and not tuples? - [6/1]
- Is there a way to match inequalities in Python ≥ 3.10? - [5/1]
- Efficient way of using numpy memmap when training neural network with pytorch - [5/0]
- How to find which column contains a certain value? - [4/4]
Sebastian Pölsterl
scikit-survival 0.16 released
I am proud to announce the release of version 0.16.0 of scikit-survival.
The biggest improvement in this release is that you can now change the evaluation metric that is used in estimators’ score method. This is particularly useful for hyper-parameter optimization using scikit-learn’s GridSearchCV. You can now use as_concordance_index_ipcw_scorer, as_cumulative_dynamic_auc_scorer, or as_integrated_brier_score_scorer to adjust the score method to your needs. The example below illustrates how to use these in practice.
For a full list of changes in scikit-survival 0.16.0, please see the release notes.
Installation
Pre-built conda packages are available for Linux, macOS, and Windows via
conda install -c sebp scikit-survival
Alternatively, scikit-survival can be installed from source following these instructions.
Hyper-Parameter Optimization with Alternative Metrics
The code below is also available as a notebook that can be executed directly.
In this example, we are going to use the German Breast Cancer Study Group 2 dataset. We want to fit a Random Survival Forest and optimize its max_depth hyper-parameter using scikit-learn’s GridSearchCV.
Let’s begin by loading the data.
import numpy as np
from sksurv.datasets import load_gbsg2
from sksurv.preprocessing import encode_categorical
gbsg_X, gbsg_y = load_gbsg2()
gbsg_X = encode_categorical(gbsg_X)
lower, upper = np.percentile(gbsg_y["time"], [10, 90])
gbsg_times = np.arange(lower, upper + 1)
Next, we create an instance of Random Survival Forest.
from sksurv.ensemble import RandomSurvivalForest
rsf_gbsg = RandomSurvivalForest(random_state=1)
We define that we want to evaluate the performance of each hyper-parameter configuration by 3-fold cross-validation.
from sklearn.model_selection import KFold
cv = KFold(n_splits=3, shuffle=True, random_state=1)
Next, we define the set of hyper-parameters to evaluate. Here, we search for the best value of max_depth between 1 and 10 (exclusive). Note that we have to prefix max_depth with estimator__, because we are going to wrap the actual RandomSurvivalForest instance with one of the classes above.
cv_param_grid = {
"estimator__max_depth": np.arange(1, 10, dtype=int),
}
Now, we can put all the pieces together and start searching for the best hyper-parameters that maximize concordance_index_ipcw.
from sklearn.model_selection import GridSearchCV
from sksurv.metrics import as_concordance_index_ipcw_scorer
gcv_cindex = GridSearchCV(
as_concordance_index_ipcw_scorer(rsf_gbsg, tau=gbsg_times[-1]),
param_grid=cv_param_grid,
cv=cv,
).fit(gbsg_X, gbsg_y)
The same process applies when optimizing hyper-parameters to maximize cumulative_dynamic_auc.
from sksurv.metrics import as_cumulative_dynamic_auc_scorer
gcv_iauc = GridSearchCV(
as_cumulative_dynamic_auc_scorer(rsf_gbsg, times=gbsg_times),
param_grid=cv_param_grid,
cv=cv,
).fit(gbsg_X, gbsg_y)
While as_concordance_index_ipcw_scorer and as_cumulative_dynamic_auc_scorer can be used with any estimator, as_integrated_brier_score_scorer is only available for estimators that provide the predict_survival_function method, which includes RandomSurvivalForest. If available, the hyper-parameters that maximize the negative integrated time-dependent Brier score will be selected, because a lower Brier score indicates better performance.
from sksurv.metrics import as_integrated_brier_score_scorer
gcv_ibs = GridSearchCV(
as_integrated_brier_score_scorer(rsf_gbsg, times=gbsg_times),
param_grid=cv_param_grid,
cv=cv,
).fit(gbsg_X, gbsg_y)
Finally, we can visualize the results of the grid search and compare the best performing hyper-parameter configurations (marked with a red dot).
import matplotlib.pyplot as plt
def plot_grid_search_results(gcv, ax, name):
ax.errorbar(
x=gcv.cv_results_["param_estimator__max_depth"].filled(),
y=gcv.cv_results_["mean_test_score"],
yerr=gcv.cv_results_["std_test_score"],
)
ax.plot(
gcv.best_params_["estimator__max_depth"],
gcv.best_score_,
'ro',
)
ax.set_ylabel(name)
ax.yaxis.grid(True)
_, axs = plt.subplots(3, 1, figsize=(6, 6), sharex=True)
axs[-1].set_xlabel("max_depth")
plot_grid_search_results(gcv_cindex, axs[0], "c-index")
plot_grid_search_results(gcv_iauc, axs[1], "iAUC")
plot_grid_search_results(gcv_ibs, axs[2], "$-$IBS")
Results of hyper-parameter optimization.
When optimizing for the concordance index, a high maximum depth works best, whereas the other metrics are best when choosing a maximum depth of 5 and 6, respectively.
ItsMyCode
nxnxn matrix python
ItsMyCode |
In this tutorial, we will take a look at how to create an nxnxn matrix in Python.
What is NxNxN?
The term NxNxN (pronounced “N by N by N”) is also called the NxNxN cube or NxNxN puzzle. It represents a cube with equal dimensions, meaning the cube has the same height, width, and length.
The NxNxN puzzles that fit under this category include the 2x2x2 cube, the Rubik’s cube (3x3x3), the 4x4x4 cube, the 5x5x5 cube, etc. The 1x1x1 cube also belongs in this category, even though it is not a twisty puzzle, because it completes the NxNxN set.
How to Create NxNxN Matrix in Python?
Now that we know what NxNxN means, let us learn how to create an nxnxn matrix in Python in different ways, with examples.
Create NxN Matrix in Python with Non Duplicating numbers
The below code creates an nxn matrix in Python that does not repeat numbers row-wise or column-wise. Matrices like these are mainly used in puzzles such as Sudoku.
# Python Program to create nxn Matrix
import numpy as np
# Provide the value of N
N = 5
# returns evenly spaced values
row = np.arange(N)
# create a new array filled with zeros of given shape and type
result = np.zeros((N, N))
# Logic to roll array elements of given axis
for i in row:
result[i] = np.roll(row, i)
print(result)
Output
[[0. 1. 2. 3. 4.]
[4. 0. 1. 2. 3.]
[3. 4. 0. 1. 2.]
[2. 3. 4. 0. 1.]
[1. 2. 3. 4. 0.]]
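The same non-repeating pattern can also be built without NumPy; as a sketch (not part of the original article), each row is just the base row [0..N-1] rotated right by the row index:

```python
# Pure-Python version of the rotation trick:
# row i is [0, 1, ..., N-1] rolled right by i positions
N = 5
result = [[(j - i) % N for j in range(N)] for i in range(N)]
for row in result:
    print(row)
```

Because every row is a distinct rotation of the same base row, no number repeats within any row or column.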
Create NxNxN matrix in Python using numpy
The below code creates an nxnxn matrix in Python. Just change the value of n based on the shape you need to generate. For a standard Rubik’s cube it would be 3x3x3, so the value of n would be 3.
Example:
# Python program to create nxnxn matrix
import numpy as np
# Provide the value of nxnxn
n = 3
a = np.arange(n)
b = np.array([a]*n)
matrix = np.array([b]*n)
# create an array containing n-dimensional points
flat_mat = matrix.reshape((int(matrix.size / n), n))
# just a random matrix we will use as a rotation
rotate = np.eye(n) + 2
# apply the rotation to each n-dimensional point
result = np.array([rotate.dot(x) for x in flat_mat])
# return to the original shape
result = result.reshape((n, n, n))
print(result)
Output
[[[6. 7. 8.]
[6. 7. 8.]
[6. 7. 8.]]
[[6. 7. 8.]
[6. 7. 8.]
[6. 7. 8.]]
[[6. 7. 8.]
[6. 7. 8.]
[6. 7. 8.]]]
The post nxnxn matrix python appeared first on ItsMyCode.
October 29, 2021
Codementor
How to Receive a Phone Call in Python with Flask and Plivo
Start writing Making an outbound phone call (https://www.plivo.com/blog/make-phone-calls-in-python/?utmsource=codementor&utmmedium=external-blog&utmcampaign=receive-call-flask&utmcontent=VOICE)...
death and gravity
reader 2.5 released
Hi there!
I'm happy to announce version 2.5 of reader, a Python feed reader library.
What's new? #
Here are the most important changes since reader 2.0.
Search enabled by default #
Full-text search works out of the box: no extra dependencies, no setup needed.
Statistics #
There are now statistics on feed and user activity, to give you a better understanding of how you consume content.
First, you can get the average number of entries per day for the last 1, 3, and 12 months, so you know how often a feed publishes new entries and how that changed over time; think sparklines: 36 entries (4.0, 2.0, 0.6).
Second, reader records the time when an entry was last marked as read or important. This will allow you to see how you engage with new entries; I'm still working on how to translate this data into a useful summary.
A nice side-effect of knowing when entry flags changed is that now it's possible to tell if an entry was explicitly marked as unimportant (new entries are also unimportant).
Improved duplicate handling #
Duplicate handling got significantly better:
- False negatives are reduced by using approximate string matching and heuristics to detect truncated content.
- You can trigger entry deduplication manually for the existing entries of a feed: just add the .reader.dedupe.once tag to the feed, and wait for the next update. Also, you can deduplicate entries by title alone, ignoring content.
- Old duplicates are deleted instead of marked as read/unimportant.
User-added entries #
You can now add entries to existing feeds. This is useful when you want to keep track of an article that is not in the feed anymore because it "fell off the end".
It can also be used to build bookmarking / read later functionality similar to that of Tiny Tiny RSS; extracting content from arbitrary pages would be pretty helpful here.
New Python versions #
reader now supports Python 3.10 and PyPy 3.8.
Other changes #
Aside from the changes mentioned above, I added a new plugin hook, added a few convenience methods and attributes, updated the web application and plugins to take advantage of the new features, and fixed a few minor bugs.
See the changelog for details.
What is reader? #
reader takes care of the core functionality required by a feed reader, so you can focus on what makes yours different.
reader allows you to:
- retrieve, store, and manage Atom, RSS, and JSON feeds
- mark entries as read or important
- add tags and metadata to feeds
- filter feeds and articles
- full-text search articles
- get statistics on feed and user activity
- write plugins to extend its functionality
...all these with:
- a stable, clearly documented API
- excellent test coverage
- fully typed Python
To find out more, check out the GitHub repo and the docs, or give the tutorial a try.
Why use a feed reader library? #
Have you been unhappy with existing feed readers and wanted to make your own, but:
- never knew where to start?
- it seemed like too much work?
- you don't like writing backend code?
Are you already working with feedparser, but:
- want an easier way to store, filter, sort and search feeds and entries?
- want to get back type-annotated objects instead of dicts?
- want to restrict or deny file-system access?
- want to change the way feeds are retrieved by using Requests?
- want to also support JSON Feed?
... while still supporting all the feed types feedparser does?
If you answered yes to any of the above, reader can help.
Why make your own feed reader? #
So you can:
- have full control over your data
- control what features it has or doesn't have
- decide how much you pay for it
- make sure it doesn't get closed while you're still using it
- really, it's easier than you think
Obviously, this may not be your cup of tea, but if it is, reader can help.
Real Python
The Real Python Podcast – Episode #84: Creating and Manipulating PDFs in Python With borb
Have you wanted to generate PDFs from your Python project? Many of the current libraries require designing the document down at the pixel level. Would you be interested in a tool that lets you specify the page layout while it handles the specific details of laying out the text? This week on the show, we talk with Joris Schellekens about his library for creating and manipulating PDFs named borb.
[ Improve Your Python With Python Tricks: Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Kushal Das
Continuing the journey at SUNET
From this week I started working for SUNET as a public interest technologist. We are under Vetenskapsrådet, which also means that from now on I am a central government employee in Sweden.
I will be helping out in various open source projects and services provided by SUNET, focusing on privacy and security. I will also continue working on all of the upstream projects I maintain, including SecureDrop.
Python Bytes
#256 And the best open source project prize goes to ...
<p><strong>Watch the live stream:</strong></p> <a href='https://www.youtube.com/watch?v=ZrcwAIix9UA' style='font-weight: bold;'>Watch on YouTube</a><br> <br> <p><strong>About the show</strong></p> <p>Sponsored by <strong>Shortcut - Get started at</strong> <a href="http://shortcut.com/pythonbytes"><strong>shortcut.com/pythonbytes</strong></a></p> <p>Special guest: <strong>The Anthony Shaw</strong></p> <p><strong>Michael #0: Itâs episode 2^8 (nearly 5 years of podcasting)</strong></p> <p><strong>Brian #1:</strong> <a href="https://lukasz.langa.pl/f15a8851-af26-4e94-a4b1-c146c57c9d20/"><strong>Where does all the effort go?</strong></a><a href="https://lukasz.langa.pl/f15a8851-af26-4e94-a4b1-c146c57c9d20/"><strong>:</strong></a> <a href="https://lukasz.langa.pl/f15a8851-af26-4e94-a4b1-c146c57c9d20/"><strong>Looking at Python core developer activity</strong></a></p> <ul> <li>Ćukasz Langa</li> <li>A look into CPython repository history and PR data</li> <li>Also, nice example of datasette in action and lots of SQL queries. </li> <li>The data, as well as the process, is open for anyone to look at.</li> <li>Cool that the process was listed in the article, including helper scripts used.</li> <li>Timeframe for data is since Feb 10, 2017, when source moved to GitHub, through Oct 9, 2021. <ul> <li>However, some queries in the article are tighter than that.</li> </ul></li> <li>Queries <ul> <li>Files involved in PRs since 1/1/20 <ul> <li>top is ceval.c with 259 merged PRs</li> </ul></li> <li>Contributors by number of merged PRs <ul> <li>lots of familiar names in the top 50, along with some bots</li> <li>itâd be fun to talk with someone about the bots used to help the Python project</li> <li>nice note: âClearly, it pays to be a bot ⊠or a release manager since this naturally causes you to make a lot of commits. But Victor Stinner and Serhiy Storchaka are neither of these things and still generate amazing amounts of activity. Kudos! 
In any case, this is no competition but it was still interesting to see who makes all these recent changes.â</li> </ul></li> <li>Who contributed where? <ul> <li>Neat. Thereâs a self reported <a href="https://devguide.python.org/experts/">Experts Index</a> in the very nice <a href="https://devguide.python.org/">Python Developerâs Guide</a>. But some libraries donât have anyone listed. The data does though. </li> <li>Ćukasz generated a <a href="https://lukasz.langa.pl/f15a8851-af26-4e94-a4b1-c146c57c9d20/assets/all_experts.txt">top-5 list</a> for each file. Contributing to some file and have a question. These folks may be able to help.</li> </ul></li> <li>Averages for PR activity <ul> <li>core developer authoring and merging their own PR takes on average <strong>~7</strong> days (std dev <strong>±41.96</strong> days);</li> <li>core developer authoring a PR which was merged by somebody else takes on average <strong>20.12</strong> days (std dev <strong>±77.36</strong> days);</li> <li>community member-authored PRs get merged on average after <strong>19.51</strong> days (std dev <strong>±81.74</strong> days).</li> <li>Interesting note on those std deviations: âWell, if we were a company selling code review services, this standard deviation value would be an alarmingly large result. But in our situation which is almost entirely volunteer-driven, the goal of my analysis is to just observe and record data. The large standard deviation reflects the large amount of variation but isnât necessarily something to worry about. We could do better with more funding but fundamentally our biggest priority is keeping CPython stable. Certain care with integrating changes is required. 
Erring on the side of caution seems like a wise thing to do.â</li> </ul></li> </ul></li> <li>More questions to be asked, especially from the issue tracker <ul> <li>Which libraries require most maintenance?</li> </ul></li> </ul> <p><strong>Michael #2:</strong> <a href="https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html"><strong>Why you shouldn't invoke setup.py directly</strong></a></p> <ul> <li>By <a href="https://blog.ganssle.io/author/paul-ganssle.html">Paul Ganssle</a> (from <a href="https://talkpython.fm/episodes/show/271/unlock-the-mysteries-of-time-pythons-datetime-that-is"><strong>Talk Python #271: Unlock the mysteries of time, Python's datetime that is!</strong></a>)</li> <li>In response to conversation in <a href="https://talkpython.fm/episodes/show/338/using-cibuildwheel-to-manage-the-scikit-hep-packages"><strong>Talk Pythonâs cibuildwheel episode</strong></a>?</li> <li>For a long time, <a href="https://github.com/pypa/setuptools">setuptools</a> and distutils were the only game in town when it came to creating Python packages</li> <li>You write a setup.py file that invokes the setup() method, you get a Makefile-like interface exposed by invoking python setup.py [HTML_REMOVED]</li> <li>The last few years <strong>all direct invocations of setup.py are effectively deprecated</strong> in favor of invocations via purpose-built and/or standards-based CLI tools like <a href="https://pip.pypa.io/en/stable/">pip</a>, <a href="https://pypa-build.readthedocs.io/en/stable/">build</a> and <a href="https://tox.wiki/en/latest/">tox</a>.</li> <li>In Python 2.0, the distutils module was introduced as a standard way to convert Python source code into *nix distro packages</li> <li>One major problem with this approach, though, is that every Python package <em>must</em> use distutils and <em>only</em> distutils â there was no standard way for a package author to make it clear that you need <em>other</em> packages in order to build or test your package. 
=> Setuptools</li> <li>Works, but sometimes you need requirements before the install (see cython example)</li> <li>A <strong>build backend</strong> is something like setuptools or <a href="https://flit.readthedocs.io/en/latest/">flit</a>, which is a library that knows how to take a source tree and turn it into a distributable artifact â a source distribution or a wheel.</li> <li>A <strong>build frontend</strong> is something like pip or <a href="https://pypa-build.readthedocs.io/en/stable/">build</a>, which is a program (usually a CLI tool) that orchestrates the build environment and invokes the build backend</li> <li>In this taxonomy, setuptools has historically been <em>both</em> a backend <em>and</em> a frontend - that said, setuptools is a <em>terrible</em> frontend. It does not implement PEP 517 or PEP 518's requirements for build frontends</li> <li>Why am I not seeing deprecation warnings?</li> <li>Use <a href="https://pypa-build.readthedocs.io/en/latest/"><strong>build package</strong></a>.</li> <li>Also can be replaced by <a href="https://tox.wiki/en/latest/">tox</a>, <a href="https://nox.thea.codes/en/stable/index.html">nox</a> or even a Makefile</li> <li>Probably should just check out <a href="https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html#summary"><strong>the summary table</strong></a>.</li> </ul> <p><strong>Anthony #3:</strong> <a href="https://opentelemetry.io"><strong>OpenTelemetry is going stable soon</strong></a></p> <ul> <li>Cloud Native Computing Foundation project for cross-language event tracing, performance tracing, logging and sampling for distributed applications.</li> <li>Engineers from Microsoft, Amazon, Splunk, Google, Elastic, New Relic <a href="https://opentelemetry.io/vendors/">and others</a> working on standards and specification.</li> <li>Formed through a merger of the OpenTracing and OpenCensus projects.</li> <li>Python SDK supports instrumentation of <a href="https://opentelemetry.io/registry/">lots of 
frameworks</a>, like Flask, Django, FastAPI (ASGI), and ORMs like SQLalchemy, or templating engines.</li> <li>All data can then be exported onto various platforms : NewRelic, Prometheus, Jaeger, DataDog, Azure Monitor, Google Cloud Monitoring.</li> </ul> <p>If you want to get started and play around, checkout the rich console exporter I submitted recently.</p> <p><strong>Brian #4:</strong> <a href="https://sadh.life/post/builtins/"><strong>Understanding all of Python, through its builtins</strong></a></p> <ul> <li>Tushar Sadhwani</li> <li>I really enjoyed the discussion before he actually got to the builtins. <ul> <li>LEGB rule defines the order of scopes in which variables are looked up in Python. <ul> <li>Local, Enclosing (nonlocal), Global, Builtin</li> </ul></li> <li>Understanding LEGB is a good thing to do for Python beginners or advanced beginners. Takes a lot of the mystery away.</li> <li>Also that all the builtins are in one </li> </ul></li> <li>The rest is a quick scan through the entire list. <ul> <li>Itâs not detailed everywhere, but pulls over scenic viewpoints at regular intervals to discuss interesting parts of <code>builtins</code>.</li> <li>Grouped reasonably. Not alphabetical</li> </ul></li> <li>Constants: Thereâs exactly 5 constants: <code>True</code>, <code>False</code>, <code>None</code>, <code>Ellipsis</code>, and <code>NotImplemented</code>.</li> <li>globals and locals: Where everything is stored</li> <li>bytearray and memoryview: Better byte interfaces</li> <li>bin, hex, oct, ord, chr and ascii: Basic conversions</li> <li>âŠ</li> <li>Well, itâs a really long article, so I suggest jumping around and reading a section or two, or three. 
Luckily there’s a nice TOC at the top.</li> </ul> <p><strong>Michael #5:</strong> <a href="https://www.infoworld.com/article/3637038/the-best-open-source-software-of-2021.html#slide5"><strong>FastAPI, Dask, and more Python goodies win best open source titles</strong></a></p> <ul> <li>Things that stood out to me</li> <li>FastAPI</li> <li>Dask</li> <li>Windows Terminal</li> <li>minikube - Kubernetes cluster on your PC</li> <li>OBS Studio</li> </ul> <p><strong>Anthony #6:</strong> <a href="https://lukasz.langa.pl/5d044f91-49c1-4170-aed1-62b6763e6ad0/"><strong>Notes From the Meeting On Python GIL Removal Between Python Core and Sam Gross</strong></a></p> <ul> <li>Following on from last week’s share on the “nogil” branch by Sam Gross, the Core Dev sprint included an interview.</li> <li>Targeted to 3.9 (alpha 3!), needs to at least be updated to 3.9.7.</li> </ul> <p>Nogil:</p> <ul> <li>Replaces pymalloc with mimalloc for thread safety</li> <li>Ties objects to the thread that created them with a non-atomic local reference count within the owner thread</li> <li>Allows for (slower) reference counting from other threads.</li> <li>Immortalizes some objects, like True, False, and None, so that their reference counts never get incremented or decremented</li> <li>Deferred reference counting</li> <li>Adjusts the GC to wait for all threads to pause at a safe point, doesn’t wait for I/O-blocked threads, and constructs a list of objects to deallocate using mimalloc</li> <li>Relocates the MRO to a thread-local (instead of process-local) to avoid contention on ref counting</li> <li>Modifies the builtin collections to be thread-safe (lists, dictionaries, etc.) since they could be shared across threads.</li> </ul> <p>IMHO, biggest thing to happen to Python in 5 years.
Encouragingly, Sam was invited to be a Core Dev and Lukasz will mentor him!</p> <p><strong>Extras</strong></p> <p>Michael</p> <ul> <li><a href="https://twitter.com/ThePSF/status/1450168556801380357"><strong>Python Developers Survey 2021</strong></a> is open</li> <li><a href="https://twitter.com/HenrySchreiner3/status/1451210681827659781"><strong>More PyPI CLI updates</strong></a></li> <li><a href="https://github.com/c4urself/bump2version/"><strong>bump2version</strong></a> via Bahram Aghaei (youtube comment)</li> <li>Was there <a href="http://mellifera.cc/wp-content/uploads/2008/10/mic-insertion2.jpg"><strong>a bee stuck in Brian’s mic</strong></a> last time?</li> </ul> <p>Brian</p> <ul> <li><a href="https://us.pycon.org/2022/speaking/speaking/"><strong>PyCon US 2022 CFP is open until Dec 20</strong></a> </li> <li><a href="https://pragprog.com/titles/bopytest2/python-testing-with-pytest-second-edition/"><strong>Python Testing with pytest, 2nd edition, Beta 7.0</strong></a> <ul> <li>All chapters now there. (Final chapter was “Advanced Parametrization”)</li> <li>It’s in technical review phase now. </li> <li>If reading, please skip ahead to the chapter you really care about and submit errata if you find anything confusing.</li> </ul></li> </ul> <p><strong>Joke:</strong></p> <p><img src="https://paper-attachments.dropbox.com/s_72552CC2D0BCB4B5301750F3A35BC5D00B37A967D1C0E0905E8082299B754EC6_1635372932916_IMG_2873.JPG" alt="" /></p>
Python Anywhere
Understanding multiple web workers and multiple users of your website
Over the years, we’ve found that one regular source of confusion for people who are just getting started with web development is how to handle what we call “global state”. We’ve written a help page explaining how to solve problems like this and wanted to expand on it here.
October 28, 2021
Gaël Varoquaux
Hiring an engineer and post-doc to simplify data science on dirty data
Note
Join us to work on reinventing data-science practices and tools to produce robust analysis with less data curation.
It is well known that data cleaning and preparation are a heavy burden to the data scientist.
Dirty data research
In the dirty data project, we have been conducting machine-learning research to see how better statistical models could readily ingest non-curated data and reduce the need for data preparation in data science. We now have a growing understanding of the problems, theoretical and practical, which lie across statistical and database topics.
Machine learning leads to different tradeoffs than traditional inferential statistics (because it can rely on more powerful models). For instance, we now have a good understanding of the case of missing values: in Le Morvan et al., we showed that with traditional methods, ignorable missingness [1] and “good” imputation are important, but for prediction it turns out that flexible predictors are what matter, and they can work with any missingness mechanism.
| [1] | “Missing at Random”, where missingness is independent of the hidden values |
Similarly, we have made good progress on tolerating normalization errors and typos. We find that, rather than attempting to deduplicate the entries or fix the typos, it is best to represent similarities and ambiguities to a flexible learning algorithm. The simplest and most reliable methods are implemented in the dirty-cat library, to facilitate the lives of data scientists.
Reinventing data science
With this understanding (and even more exciting on-going research), we want to revisit data science. Machine learning can provide flexible models for many uses of data science. Our goal is to use it to help assemble and analyze datasets while minimizing human effort. For this, we need tools that can answer typical data-science questions using machine learning, starting from the raw data, often spread across multiple files or multiple tables of a database. Building these tools requires data-science research, a new vision of data-science APIs, and careful software crafting.
Join us in this adventure
We have an awesome team, with a great mix of people of different seniority and different expertise (statistics, machine learning, databases, software engineering), sharing offices with the scikit-learn team at Inria. But we have too many exciting ideas, so we are growing this team.
A data-science engineer: new software with new ideas
We are looking for someone with a background in data science or numerical Python programming to join us, to help with designing a new data-science library, evolving from dirty-cat, and to help with data-science experimentation for the research.
We like people who care about data and about designing good tools, and who have a vision for data science. We are happy to consider different levels of experience. Apply on the job offer.
A post-doc researcher: science joining data engineering to deep learning
We will soon be announcing a post-doc position to join the team for research in this scope. We are interested in questions around learning on relational or tabular data, or learning data integration. We have plenty of ideas to explore around embeddings in databases, learning to aggregate, learning on sets, graph neural networks for databases, or distributional matching for entity and schema alignment. We expect to borrow tools (conceptual and practical) from deep learning, but to blend them with techniques from data integration, knowledge graphs, and databases.
The job posting will be out soon, but I am running out of the office right now for vacations (work-life balance also matters to us).
Diversity is important
Our team is not as diverse as I would like it to be (though probably doing better than the typical computer-science team). We love diverse candidates. Do not hesitate.
PyCharm
Early Access PyCharm Podcast: Python and Docker

Docker has become an integral part of the developer ecosystem in the past few years. Most clouds now make it easy to host a dockerized application. However, this was not always the case: roughly five years ago, Docker was still in its infancy.
This is why we talked to Michael Golubev, creator of the Docker plugin for IntelliJ-based IDEs, to gain an understanding of what the landscape was like at the time, and what drove him to create the plugin in the first place.
In this episode, we talk to Michael about a whole host of things: how he got into developing the plugin, the milestones that Docker transitioned through, and the challenges that he faced as a result and continues to face. Check it out!
IslandT
Python Calculator application update
After an hour of hard work today, I have managed to get a few calculator buttons to function, but I believe there may be bugs hiding in places I have not yet discovered. Below are the calculator buttons for which I have included functions that perform various calculations!
The newly included calculate module is used to do arithmetic operations such as addition, subtraction, division, and multiplication, as well as other operations on numbers!
Those buttons within the blue square are ready to use. You can find the entire calculator project's files (main.py and calculate.py, as well as other files in the future) under this project link. This project has not yet been completed; more work needs to be done, including looking for bugs in the program as well as coding the remaining functions of the buttons. If you want to help me out, do correct my programming mistakes and open a pull request on GitHub after you have cloned the files and worked on your own copy!
I will now continue to work on the remaining buttons’ functions and update my work from time to time under the same project repository. Do follow me on GitHub to receive updates on this Python project and other Python projects as well!
October 27, 2021
Python for Beginners
Frozenset in Python
While programming in Python, you might have used sets, lists, and dictionaries in your programs. In this article, we will study another container object called a frozenset in Python. We will discuss how to create a frozenset and how we can access its elements.
What is a frozenset in Python?
You might have worked with sets in Python. Frozensets are container objects that have all the properties of a set but are immutable. A frozenset is related to a set in a similar way as a tuple is related to a list. The major properties of a frozenset are as follows.
- Frozensets contain unique elements.
- They are immutable and no elements can be added, modified or deleted from a frozenset.
- We can add elements to a frozenset only during creation of a frozenset object.
Let us now discuss how we can create a frozenset and access its elements.
How to create a frozenset in Python?
We can create a frozenset using the frozenset() constructor, which takes a container object as input and creates a frozenset with the elements of that container. For example, we can create a frozenset with the elements of a list as follows.
myList = [1, 2, 3, 4, 5]
print("The given list is:")
print(myList)
myFrozenset = frozenset(myList)
print("The output frozenset is:")
print(myFrozenset)
Output:
The given list is:
[1, 2, 3, 4, 5]
The output frozenset is:
frozenset({1, 2, 3, 4, 5})
Similarly, we can create a frozenset using the elements of a set as follows.
mySet = {1, 2, 3, 4, 5}
print("The given set is:")
print(mySet)
myFrozenset = frozenset(mySet)
print("The output frozenset is:")
print(myFrozenset)
Output:
The given set is:
{1, 2, 3, 4, 5}
The output frozenset is:
frozenset({1, 2, 3, 4, 5})
When no input is given to the frozenset() constructor, it creates an empty frozenset.
myFrozenset = frozenset()
print("The output frozenset is:")
print(myFrozenset)
Output:
The output frozenset is:
frozenset()
When we pass a Python dictionary as an input to the frozenset() constructor, it creates a frozenset of the keys of the dictionary. This can be observed in the following example.
myDict = {1:1, 2:4, 3:9, 4:16, 5:25}
print("The given dictionary is:")
print(myDict)
myFrozenset = frozenset(myDict)
print("The output frozenset is:")
print(myFrozenset)
Output:
The given dictionary is:
{1: 1, 2: 4, 3: 9, 4: 16, 5: 25}
The output frozenset is:
frozenset({1, 2, 3, 4, 5})
Access elements from a frozenset
Similar to other container objects, we can access the elements of a frozenset using an iterator as follows.
mySet = {1, 2, 3, 4, 5}
print("The given set is:")
print(mySet)
myFrozenset = frozenset(mySet)
print("The elements of the frozenset are:")
iterator=iter(myFrozenset)
for i in iterator:
    print(i)
Output:
The given set is:
{1, 2, 3, 4, 5}
The elements of the frozenset are:
1
2
3
4
5
We can also traverse the elements of the frozenset using a for loop as follows.
mySet = {1, 2, 3, 4, 5}
print("The given set is:")
print(mySet)
myFrozenset = frozenset(mySet)
print("The elements of the frozenset are:")
for i in myFrozenset:
    print(i)
Output:
The given set is:
{1, 2, 3, 4, 5}
The elements of the frozenset are:
1
2
3
4
5
Add elements to a frozenset
We cannot add elements to a frozenset as they are immutable. Similarly, we cannot modify or delete elements from a frozenset.
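The immutability described above is easy to verify: frozenset objects simply have no mutating methods, so calling add() on one raises an AttributeError. Here is a short sketch:

```python
myFrozenset = frozenset([1, 2, 3])

# frozenset has no add(), remove(), or discard() methods,
# so any attempt to mutate it fails with AttributeError.
try:
    myFrozenset.add(4)
except AttributeError as error:
    print("Cannot modify a frozenset:", error)
```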
Difference between set and frozenset in Python
A frozenset in Python can be considered an immutable set. The main difference between a set and a frozenset is that we cannot modify elements in a frozenset. Other properties of sets and frozensets are almost identical.
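One practical consequence of this difference: because frozensets are immutable, they are hashable, so they can be used as dictionary keys or as elements of another set, which plain sets cannot. A small illustration:

```python
# A frozenset is hashable, so it can serve as a dictionary key.
edge_weights = {frozenset({"A", "B"}): 5}
# Element order doesn't matter for equality or hashing:
print(edge_weights[frozenset({"B", "A"})])  # prints 5

# A plain set is unhashable, so using it as a key raises TypeError.
try:
    bad = {{"A", "B"}: 5}
except TypeError as error:
    print("set objects cannot be dict keys:", error)
```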
Conclusion
In this article, we have discussed how to create a frozenset in Python and what its properties are. To learn more about Python programming, you can read this article on list comprehension. You may also like this article on the linked list in Python.
The post Frozenset in Python appeared first on PythonForBeginners.com.
Shannon -jj Behrens
Python: PyWeek 32: Lil Miss Vampire
TL;DR A world that scrolls infinitely in any direction, an RPG-like UI, and simple, real-time fighting.
My younger kids and I built this entry for PyWeek 32 based on the theme "Neverending".
The key innovations are:
- It has a neverending world. As the player walks along, it picks up tiles and places new ones invisibly. It uses an LRUDict to remember the last million tiles you've seen. This matches real life in that if you go back to a place after 20 years, it'll look different than when you first saw it.
- The user interface was inspired by Super Mario RPG, but the fighting mechanics are purposely realtime. It's a lot like if you were playing Street Fighter, but all you were allowed to do was use a fast punch, a slow punch, or block. It's a little bit like roshambo.
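The LRUDict idea mentioned above can be sketched with collections.OrderedDict. This is a simplified illustration of the concept, not the game's actual implementation (the real game remembers about a million tiles):

```python
from collections import OrderedDict

class LRUDict(OrderedDict):
    """A dict that evicts its least-recently-added entry once full.

    Simplified sketch of an LRU cache for world tiles.
    """

    def __init__(self, max_size=3):
        super().__init__()
        self.max_size = max_size

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.move_to_end(key)          # newest entries go to the end
        if len(self) > self.max_size:
            self.popitem(last=False)   # drop the oldest entry

tiles = LRUDict(max_size=3)
for position in [(0, 0), (0, 1), (0, 2), (0, 3)]:
    tiles[position] = "grass"
print(list(tiles))  # (0, 0) has been evicted
```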
The code:
- The code is pretty pleasant. I made use of lots of new features in the latest Python, and I built a pretty decent developer experience.
- It's built on the excellent arcade library which has exceptionally good documentation, tutorials, and examples.
- I used type annotations everywhere, and I enforced them via mypy. I made extensive use of `typing.NamedTuple` which gives it a nice, immutable, well-typed flavor.
- I used black to format the code during check-in.
- There are extensive unit tests for the models. And there are git hooks to keep everything sane.
- Running `make iterate` will reformat the code, run mypy to enforce types, run the unit tests, and then launch the game.
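The `typing.NamedTuple` style mentioned above looks like this; `Tile` here is a hypothetical example for illustration, not a class from the project:

```python
from typing import NamedTuple

class Tile(NamedTuple):
    """An immutable, typed record; mypy can check the field types."""
    x: int
    y: int
    kind: str = "grass"

tile = Tile(3, 4)
print(tile.kind)  # "grass"
# tile.x = 5 would raise AttributeError: instances are immutable.
```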
Here's the GitHub page with more details.
Real Python
Django Templates: Implementing Custom Tags and Filters
Django templates help you manage your web application’s HTML. Templates use a mini-language with variables, tags, and filters. You can conditionally include blocks, create loops, and modify variables before they’re shown. Django comes with many built-in tags and filters, but what if they’re not enough? In that case, write your own! This tutorial covers the ins and outs of writing your own Django template custom tags and filters.
In this tutorial, you’ll learn how to:
- Write and register a function as a custom filter
- Understand how autoescaping works in custom tags and filters
- Use @simple_tag to write a custom template tag
- Use @inclusion_tag to render a tag based on a subtemplate
- Write a complex template tag with a parser and renderer
By the end of the tutorial, you’ll be able to write custom filters to modify data in your templates and custom tags that give you access to the full power of Python within your templates.
Free Bonus: Click here to get access to a free Django Learning Resources Guide (PDF) that shows you tips and tricks as well as common pitfalls to avoid when building Python + Django web applications.
Getting Started
To play around with your own Django template custom tags and filters, you’re going to need a Django project. You’ll build dinosoar, a small website with all sorts of dinosaur info. Although the name implies that you’ll only include flying dinos, that’s just for marketing spin. All your favorite heavyweights will be there as well.
If you’ve never set up a Django project before or if you need a refresher, you may want to read Get Started With Django Part 1: Build a Portfolio App first.
Django is a third-party library, so it should be installed in a virtual environment. If you’re new to virtual environments, check out Python Virtual Environments: A Primer. Create and activate a new virtual environment for yourself and then run the following commands:
1$ python -m pip install django==3.2.5
2$ django-admin startproject dinosoar
3$ cd dinosoar
4$ python manage.py startapp dinofacts
5$ python manage.py migrate
These commands perform the following actions:
- Line 1 runs the pip command to install Django.
- Line 2 creates your new Django project.
- Line 3 changes the current working directory to the dinosoar project.
- Line 4 uses the manage.py command to create a Django app called dinofacts, where your main view will live.
- Line 5 migrates any database changes. Even if you aren’t creating models, this line is necessary because the Django admin is active by default.
With the project created, it’s time to make some configuration changes and write a quick view to help you test your custom tags and filters.
Setting Up a Django Project
You need to make some changes to your project’s settings to make Django aware of your newly created app and to configure your templates. Edit dinosoar/dinosoar/settings.py and add dinofacts to the INSTALLED_APPS list:
34# dinosoar/dinosoar/settings.py
35
36INSTALLED_APPS = [
37 "django.contrib.admin",
38 "django.contrib.auth",
39 "django.contrib.contenttypes",
40 "django.contrib.sessions",
41 "django.contrib.messages",
42 "django.contrib.staticfiles",
43 "dinofacts",
44]
Within the same file, you’ll need to update the DIRS value in the TEMPLATES attribute. This tells Django where to look for your template files:
57# dinosoar/dinosoar/settings.py
58
59TEMPLATES = [
60 {
61 "BACKEND": "django.template.backends.django.DjangoTemplates",
62 "DIRS": [
63 BASE_DIR / "templates",
64 ],
65 "APP_DIRS": True,
66 "OPTIONS": {
67 "context_processors": [
68 "django.template.context_processors.debug",
69 "django.template.context_processors.request",
70 "django.contrib.auth.context_processors.auth",
71 "django.contrib.messages.context_processors.messages",
72 ],
73 },
Starting with Django 3.1, the BASE_DIR value that specifies where the project lives is a pathlib object. The change to the DIRS value above tells Django to look in a templates/ subdirectory within your project directory.
Note: If you use Django 3.0 or earlier, you’ll set BASE_DIR using the os.path module. In that case, use os.path.join() to specify the path.
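To see the difference between the two styles outside of Django, here is how the same templates path would be built with pathlib (the Django 3.1+ style) and with os.path (the pre-3.1 style). The directory below is just a stand-in for your project directory:

```python
import os
from pathlib import Path

# Django 3.1+ style: BASE_DIR is a pathlib.Path, so "/" joins path segments.
BASE_DIR = Path("/home/realpython/dinosoar")
templates_dir = BASE_DIR / "templates"

# Django 3.0 and earlier style: BASE_DIR is a string, joined with os.path.join().
base_dir = "/home/realpython/dinosoar"
legacy_templates_dir = os.path.join(base_dir, "templates")

print(templates_dir)         # /home/realpython/dinosoar/templates
print(legacy_templates_dir)  # same path, built the old way
```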
With the settings changed, don’t forget to create the templates/ directory within your project:
$ pwd
/home/realpython/dinosoar
$ mkdir templates
It’s time to start writing some code. To test your custom template tags and filters, you’ll need a view. Edit dinosoar/dinofacts/views.py as follows:
1# dinosoar/dinofacts/views.py
2
3from datetime import datetime
4from django.shortcuts import render
5
6def show_dino(request, name):
7 data = {
8 "dinosaurs": [
9 "Tyrannosaurus",
10 "Stegosaurus",
11 "Raptor",
12 "Triceratops",
13 ],
14 "now": datetime.now(),
15 }
16
17 return render(request, name + ".html", data)
Lines 7 to 15 create a dictionary with some sample data. You’ll use this in your templates to test your tags and filters. The rest of this view does something a little unorthodox: it takes a parameter that specifies the name of a template.
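As a taste of what the tutorial builds toward, a custom filter is just a registered Python function. The registration pattern below follows Django's documented API (template.Library and @register.filter); the shout filter itself is a hypothetical example, shown here as plain Python so it runs outside a Django project:

```python
# In a real project this would live in yourapp/templatetags/dino_extras.py:
#
#     from django import template
#     register = template.Library()
#
#     @register.filter
#     def shout(value):
#         ...
#
# The filter body itself is ordinary Python:
def shout(value):
    """Upper-case a value, as a template filter would: {{ name|shout }}."""
    return str(value).upper()

print(shout("stegosaurus"))  # STEGOSAURUS
```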
Read the full article at https://realpython.com/django-template-custom-tags-filters/ »
[ Improve Your Python With Python Tricks – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
October 26, 2021
PyCoderâs Weekly
Issue #496 (Oct. 26, 2021)
#496 – OCTOBER 26, 2021
View in Browser »
Writing Idiomatic Python
What are the programming idioms unique to Python? This course is a short overview for people coming from other languages and an introduction for beginners to the idiomatic practices within Python. You’ll cover truth values, looping, DRY principles, and the Zen of Python.
REAL PYTHON course
Notes From the Meeting on Python GIL Removal Between Python Core and Sam Gross
“During the annual Python core development sprint we held a meeting with Sam Gross, the author of nogil, a fork of Python 3.9 that removes the GIL. This is a non-linear summary of the meeting.”
ŁUKASZ LANGA
Analyze Code-Level Application Performance Across Your Entire Environment With Datadog APM
Datadog’s distributed tracing and APM generates flame graphs from real requests, enabling you to visualize app performance and pinpoint hard-to-reproduce problems in your production code. Without switching tools, you can pivot to related logs and metrics for full context. Try Datadog APM free →
DATADOG sponsor
PEP 660: Editable Installs for pyproject.toml Based Builds (Wheel Based)
“Now that PEP 517 provides a mechanism to create alternatives to setuptools, and decouple installation front ends from build backends, we need a new mechanism to install packages in editable mode.”
PYTHON.ORG
Django 4.0 Beta 1 Released
Check out the work-in-progress development release notes for more details.
DJANGO SOFTWARE FOUNDATION
PyCon US 2022: Conference Website Launched
PyCon US 2022 takes place in Salt Lake City, Utah from April 27, 2022 to May 5, 2022.
PYCON.ORG
Discussions
What Is Your Most Controversial Python-Related Opinion?
“I like lambdas.”
REDDIT
Python Jobs
Full Stack Software Engineer Django/Postgres/React (Washington D.C.)
Senior Software Engineer (Washington D.C.)
Senior Python Engineer @ Moody's AI & ML Center of Excellence (New York, NY, USA)
Senior Software Engineer (Washington D.C.)
Full Stack Developer (Anywhere)
Software Engineer (Anywhere)
Articles & Tutorials
The Composition Over Inheritance Principle
“In Python as in other programming languages, this grand principle encourages software architects to escape from Object Orientation and enjoy the simpler practices of Object Based programming instead.”
BRANDON RHODES
Using the “not” Boolean Operator in Python
In this step-by-step tutorial, you’ll learn how Python’s “not” operator works and how to use it in your code. You’ll get to know its features and see what kind of programming problems you can solve by using “not” in Python.
REAL PYTHON
How to Quickly Label Data for Machine Learning
With Toloka, you can control the accuracy and speed of data labeling to develop high performing ML models. Our platform supports annotation for image classification, semantic segmentation, object detection, named entity recognition, sentiment analysis, speech recognition, text classification →
TOLOKA AI sponsor
Using the len() Function in Python
In this tutorial, you’ll learn how and when to use the len() Python function. You’ll also learn how to customize your class definitions so that objects of a user-defined class can be used as arguments in len().
REAL PYTHON
PEP 670 [Draft]: Convert Macros to Functions in the Python C API
“Converting macros and static inline functions to regular functions makes these regular functions accessible to projects which use Python but cannot use macros and static inline functions.”
PYTHON.ORG
A NASA TV Still Frame Viewer in Python
Spacestills is a Python program for viewing NASA TV still frames. It’s a learning project based on the PySimpleGUI GUI framework.
PAOLO AMOROSO
A New, Free Python Code Quality & Security Scanner With Real-Time Scanning
Like Grammarly for your code. Scan your Python code for quality & security issues, and get fix advice in your IDE. Get started with Snyk for free.
SNYK.IO sponsor
Projects & Code
traviscli: Semantically Version Your Python Project on TravisCI
GITHUB.COM/HASII2011 • Shared by Humberto Sanchez II
staircase: Data Analysis and Manipulation With Mathematical Step Functions
STAIRCASE.DEV • Shared by Riley Clement
fork-purger: Delete All of Your Forked Repositories on Github
GITHUB.COM/REDNAFI • Shared by Redowan Delowar
Events
Plone Conference 2021 Online
October 23 to November 1, 2021
PLONECONF.ORG
Weekly Real Python Office Hours Q&A (Virtual)
October 27, 2021
REALPYTHON.COM
PyData Global 2021
October 28 to October 31, 2021
PYDATA.ORG
PythOnRio Meetup
October 30, 2021
PYTHON.ORG.BR
Melbourne Python Users Group
November 1, 2021
J.MP
PyCon Chile
November 5 to November 8, 2021
PYCON.CL
deploy by DigitalOcean
November 16 to November 17, 2021
DIGITALOCEAN
Happy Pythoning!
This was PyCoder’s Weekly Issue #496.
View in Browser »
[ Subscribe to PyCoder’s Weekly – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Paolo Amoroso
How to Add Code Syntax Highlighting to Blogger
On my blog I always wanted to format source code in Python and a couple more languages, but couldn’t find a convenient way. Until I read a tutorial on adding syntax highlighting to Blogger blogs like mine.
The Blogger post composer actually provides the monospace Courier font that may be used for source code, but it works well only for inline text.
If I apply the Courier font to a block of code, the composer renders each line as a separate paragraph. This leaves too much vertical space that makes the code look ugly. A workaround is to switch to the HTML view in the composer and wrap the block within <pre> ... </pre> tags, which insert the correct line spacing. However, the code doesn’t stand out on the page and there’s room for improving its scannability and visual impact.
Fortunately, Blogger is an old dog I can teach new tricks to, like the setup presented in the tutorial I found.
| A Python code snippet with syntax highlighting rendered by highlight.js in a post of the Moonshots Beyond the Cloud blog. |
The setup relies on highlight.js, an open-source JavaScript library for syntax highlighting of source code in a couple hundred languages with dozens of styles. All it takes is adding three lines to the HTML header of the Blogger theme, and wrapping code blocks within <pre><code> ... </code></pre> in the HTML view of the composer.
I’ll explain how I configured my blog for syntax highlighting and how I apply the formatting to code blocks.
Getting highlight.js
There are several ways of using highlight.js and I went with the most straightforward.
The first, one-time step is to edit the blog’s header to add HTML code to fetch the library when a browser loads a page of the blog. I edited the HTML source of the blog theme and inserted these lines toward the end of the <head> section:
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.3.1/styles/default.min.css" integrity="sha512-3xLMEigMNYLDJLAgaGlDSxpGykyb+nQnJBzbkQy2a0gyVKL2ZpNOPIj1rD8IPFaJbwAgId/atho1+LBpWu5DhA==" crossorigin="anonymous" referrerpolicy="no-referrer" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.3.1/highlight.min.js" integrity="sha512-Pbb8o120v5/hN/a6LjF4N4Lxou+xYZ0QcVF8J6TWhBbHmctQWd8O6xTDmHpE/91OjPzCk4JRoiJsexHYg4SotQ==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
<script>hljs.highlightAll();</script>
The first two lines fetch the components of highlight.js from the cdnjs CDN. The third line calls the library’s entry point.
This configuration step is covered in the usage instructions page of the highlight.js documentation under “Basic usage” > “In the browser”. The sample code there is generic and needs to be fleshed out with links to the latest version of the library. The third line of the sample code, <script>hljs.highlightAll();</script>, calls the library and I copied it as is to the blog header as the third line of my configuration snippet above.
The simplest way of linking to the latest version, which is what I did, is to fetch the library from a CDN such as cdnjs.
Section “Fetch via CDN” on the same instructions page includes a code snippet with up-to-date links, which shouldn't be used as is. Why? Because the snippet is vulnerable to downloading compromised versions of the library.
Subresource integrity can address this security issue. Here’s how. I clicked the link next to the name of the cdnjs CDN, again on the same instructions page of the highlight.js documentation. The link leads to a cdnjs tool that generates code snippets with the appropriate digests for checking subresource integrity. The entries I needed are those for the files default.min.css and highlight.min.js.
I finally put together the three pieces in my snippet, i.e. the two lines for fetching the library and the third to call it, and added them to the blog header.
I prefer light designs, so the default light theme of highlight.js works well for me, blends nicely with the design of my Blogger theme, and requires no additional configuration. The library comes with dozens of light and dark themes, though.
Once configured, highlight.js requires no other setup, and the blog is ready to format code blocks with syntax highlighting.
Formatting code blocks
To add a block of code to a post, I paste it into the composer at the right spot. Then I switch to the HTML view and wrap the code within <pre><code> ... </code></pre>, taking care of removing any <p> tags. For longer blocks it’s easier to insert a placeholder where the block should be and paste the full code there in the HTML view, thus saving the effort of removing the tags.
Highlight.js auto-detects the language and no further action is usually necessary.
Besides ordinary Python, there are a couple more options of the class attribute I can add to the <code> tag, one for profiler results and another for REPL sessions. For example, wrapping a block within <pre><code class="language-python-repl">...</code></pre> formats a block of a Python REPL session.
As for inline code, I still select it in the composer and apply the Courier font. This is good enough for running text, and it matches the way inline code is typically formatted without highlighting on other sites. Besides, highlight.js works only with blocks.
Code samples
To test highlight.js I applied syntax highlighting to the code blocks of an old post about Spacestills, a NASA TV still frame viewer I wrote in Python. The result looked good, so here is a longer Python example, the main function of Spacestills that runs the PySimpleGUI event loop:
def main():
    """Run event loop."""
    window = sg.Window('Spacestills', LAYOUT, finalize=True)
    current_still = refresh(window)
    delta = DELTA
    next_reload_time = datetime.now() + timedelta(seconds=delta)

    while True:
        event, values = window.read(timeout=100)
        if event in (sg.WIN_CLOSED, 'Exit'):
            break
        elif ((event == '-RELOAD-') or
                (values['-AUTORELOAD-'] and timeout_due(next_reload_time))):
            current_still = refresh(window, values['-RESIZE-'])
            if values['-AUTORELOAD-']:
                next_reload_time = next_timeout(delta)
        elif event == '-RESIZE-':
            current_still = change_aspect_ratio(
                window, current_still, current_still.new_size())
        elif event == '-SAVE-':
            filename = sg.popup_get_file(
                'File name', file_types=[('PNG', '*.png')], save_as=True,
                title='Save image', default_extension='.png')
            if filename:
                saved = save(current_still, filename)
                if not saved:
                    sg.popup_ok('Error while saving file:', filename, title='Error')
        elif event == '-UPDATE_DELTA-':
            # The current cycle should complete at the already scheduled time. So
            # don't update next_reload_time yet because it'll be taken care of at
            # the next -AUTORELOAD- or -RELOAD- event.
            delta, valid = validate_delta(values['-DELTA-'])
            if not valid:
                window['-DELTA-'].update(str(DELTA))

    window.close()
    del window
Next, let's format a short function taken from Suite8080, a suite of Intel 8080 Assembly cross-development tools I'm writing in Python. The function processes the mov mnemonic in the assembler:
# mov: 0x40 + (8-bit first register offset << 3) + (8-bit second register offset)
# mov m, m: 0x76 (hlt)
def mov():
    check_operands(operand1 != '' and operand2 != '')
    # 0x40 = 64
    opcode = 64 + (register_offset8(operand1) << 3) + register_offset8(operand2)
    pass_action(1, opcode.to_bytes(1, byteorder='little'))
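To see the opcode arithmetic in isolation, here is a minimal sketch of the same computation. The helpers check_operands(), register_offset8(), and pass_action() are Suite8080 internals not shown here, so a plain lookup table of the standard Intel 8080 3-bit register encodings stands in for register_offset8():

```python
# Standard Intel 8080 register encodings (3 bits each); a stand-in for
# Suite8080's register_offset8() helper, which is not shown in the post.
OFFSETS = {'b': 0, 'c': 1, 'd': 2, 'e': 3, 'h': 4, 'l': 5, 'm': 6, 'a': 7}

def mov_opcode(dst, src):
    """Return the 1-byte opcode for mov dst, src."""
    return (0x40 + (OFFSETS[dst] << 3) + OFFSETS[src]).to_bytes(1, byteorder='little')

print(mov_opcode('b', 'c').hex())  # mov b, c -> 41
print(mov_opcode('a', 'm').hex())  # mov a, m -> 7e
```

These match the documented 8080 opcodes: MOV B,C assembles to 0x41 and MOV A,M to 0x7E.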
Finally, some code in a different language. This is the source of the hello world Intel 8080 Assembly program for CP/M that's part of Suite8080:
; Hello world for CP/M

            org     100h

bdos        equ     0005h       ; BDOS entry point
wstrf       equ     09h         ; BDOS function: write string

            mvi     c, wstrf
            lxi     d, message
            call    bdos
            ret

message:    db      'Greetings from Suite8080.$'

            end
Highlight.js supports ARM, AVR, and x86 assembly but not Intel 8080 (too bad such a bleeding-edge chip is missing), so this time it didn't detect the language. However, the result still looks reasonably good and I'm ready to publish more code.
Python for Beginners
Python Decorators
Python provides us with many constructs for performing different tasks. While programming, we sometimes need to modify the behavior of a function. But we may not be allowed to change the function's source code, because it might be in use in its original form elsewhere in the program. In such cases, Python decorators can be used.
In this article, we will study what Python decorators are, how we can create them, and how we can use them to modify the functionality of other functions in Python.
What is a Python decorator?
Python decorators are functions or other callable objects that can be used to add functionality to another function without modifying its source code. A decorator in Python accepts a function as an input argument, adds some functionality to it, and returns a new function with the modified behavior.
Implementing decorators in Python requires knowledge of a few other concepts, namely first-class objects and nested functions. We will take a look at these concepts first so that the implementation of decorators is easier to follow.
Concepts required to understand decorators
First class objects
In Python, first class objects are those objects that
- can be passed to a function as a parameter.
- can be returned from a function.
- can be assigned to a variable.
All the variables that we use in our programs are first-class objects, whether they hold a primitive data type, a collection object, or an instance of a class.
Here I want to emphasize that functions in Python are also first-class objects: we can pass a function as an input argument, and we can return a function from another function.
For Example, let us look at the following source code.
Here, we have defined a function add() that takes two numbers as input and prints their sum. We have defined another function random_adder() that takes a function as input, randomly generates two numbers and calls the input function add() with the randomly generated numbers as input.
import random

def add(num1, num2):
    value = num1 + num2
    print("In the add() function. The sum of {} and {} is {}.".format(num1, num2, value))

def random_adder(func):
    val1 = random.randint(0, 10)
    val2 = random.randint(0, 100)
    print("In the random_adder. Values generated are {} and {}".format(val1, val2))
    func(val1, val2)

# execute
random_adder(add)
Output:
In the random_adder. Values generated are 1 and 14
In the add() function. The sum of 1 and 14 is 15.
From the code and output, you can observe that the function add() has been passed to the random_adder() function as input and the random_adder() function calls the add() function that prints the output.
We can also return a function from another function or callable object. For instance, we can modify the above source code and define a function operate() inside the random_adder() function. The operate() function performs the entire operation done by the random_adder() function in the previous source code.
Now, we can return the operate() function from the random_adder() function and assign it to a variable named do_something. In this way, we will be able to execute the operate() function outside the random_adder() function by calling the variable do_something as follows.
import random

def add(num1, num2):
    value = num1 + num2
    print("In the add() function. The sum of {} and {} is {}.".format(num1, num2, value))

def random_adder(func):
    print("In the random_adder.")
    def operate():
        val1 = random.randint(0, 10)
        val2 = random.randint(0, 100)
        print("In the operate() function. Values generated are {} and {}".format(val1, val2))
        func(val1, val2)
    print("Returning the operate() function.")
    return operate

# execute
do_something = random_adder(add)
do_something()
Output:
In the random_adder.
Returning the operate() function.
In the operate() function. Values generated are 3 and 25
In the add() function. The sum of 3 and 25 is 28.
Nested Functions
Nested functions are the functions defined inside another function. For example, look at the following source code.
Here, we have defined a function add() that takes two numbers as input and calculates their sum. Also, we have defined the function square() inside add() that prints the square of the 'value' calculated in the add() function.
def add(num1, num2):
    value = num1 + num2
    print("In the add() function. The sum of {} and {} is {}.".format(num1, num2, value))
    def square():
        print("I am in square(). The square of {} is {}.".format(value, value ** 2))
    print("calling square() function inside add().")
    square()

# execute
add(10, 20)
Output:
In the add() function. The sum of 10 and 20 is 30.
calling square() function inside add().
I am in square(). The square of 30 is 900.
Free Variables
We know that a variable can be accessed in the scope in which it has been defined. But in the case of nested functions, the inner function can also access the variables of the enclosing function.
In the above example, you can see that we defined the variable 'value' inside the add() function but accessed it in the square() function. Variables like these are called free variables.
But why the name free variables?
Because a free variable can be accessed even after the function in which it was defined has completed its execution. For example, look at the source code given below.
def add(num1, num2):
    value = num1 + num2
    print("In the add() function. The sum of {} and {} is {}.".format(num1, num2, value))
    def square():
        print("I am in square(). The square of {} is {}.".format(value, value ** 2))
    print("returning square() function.")
    return square

# execute
do_something = add(10, 20)
print("In the outer scope. Calling do_something.")
do_something()
Output:
In the add() function. The sum of 10 and 20 is 30.
returning square() function.
In the outer scope. Calling do_something.
I am in square(). The square of 30 is 900.
Here, once the add() function returns the square() function, add() completes its execution and its local scope is gone. Still, we can access the variable 'value' by calling the square() function assigned to the variable do_something, because the closure keeps 'value' alive.
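This is not just a figure of speech: the free variable is physically stored in the returned function's closure, which Python exposes through the __closure__ attribute. Here is a small sketch, using a simplified variant of the add()/square() example above that returns values instead of printing them:

```python
def add(num1, num2):
    value = num1 + num2
    def square():
        # 'value' is a free variable captured from add()'s scope.
        return value ** 2
    return square

do_something = add(10, 20)
# add() has already returned, yet the free variable survives in the closure.
print(do_something.__closure__[0].cell_contents)  # 30
print(do_something())  # 900
```

Inspecting __closure__ like this is a handy way to convince yourself that the enclosing function's variable really outlives the call.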
Now that we have discussed the concepts needed for implementing Python decorators, let's dive in and look at how we can implement them.
How to create Python Decorators?
We can create Python decorators from any callable object that accepts a callable object as an input argument and returns a callable object. Here, we will create decorators using functions.
For a function to be a decorator, it should satisfy the following properties.
- It must accept a function as an input.
- It must contain a nested function.
- It must return a function.
First, we will define a function add() that takes two numbers as input and prints their sum.
def add(num1, num2):
    value = num1 + num2
    print("The sum of {} and {} is {}.".format(num1, num2, value))

# execute
add(10, 20)
Output:
The sum of 10 and 20 is 30.
Now, suppose the add() function should also print the product of the numbers along with their sum. For this, we can create a decorator function.
Let us first define a function that takes the add() function as input and decorates it with the additional requirements.
def decorator_function(func):
    def inner_function(*args):
        product = args[0] * args[1]
        print("Product of {} and {} is {} ".format(args[0], args[1], product))
        func(args[0], args[1])
    return inner_function
Inside the decorator_function(), we have defined the inner_function() that prints the product of the numbers that are given as input and then calls the add() function. The decorator_function() returns the inner_function().
Now that we have defined the decorator_function() and the add() function, let us see how we can decorate the add() function using the decorator_function().
Create Python Decorators By passing functions as arguments to another function
The first way to decorate the add() function is by passing it as an input argument to the decorator_function(). Once the decorator_function() is called, it will return inner_function() that will be assigned to the variable do_something. After that, the variable do_something will become callable and will execute the code inside the inner_function() when called. Thus, we can call do_something to print the product and the sum of the input numbers.
def add(num1, num2):
    value = num1 + num2
    print("The sum of {} and {} is {}.".format(num1, num2, value))

def decorator_function(func):
    def inner_function(*args):
        product = args[0] * args[1]
        print("Product of {} and {} is {} ".format(args[0], args[1], product))
        func(args[0], args[1])
    return inner_function

# execute
do_something = decorator_function(add)
do_something(10, 20)
Output:
Product of 10 and 20 is 200
The sum of 10 and 20 is 30.
Create Python Decorators Using @ sign
A simpler way to perform the same operation is by using the "@" sign. We specify the name of the decorator function after the @ sign, just before the definition of the add() function. After this, whenever the add() function is called, it will always print both the product and the sum of the input numbers.
def decorator_function(func):
    def inner_function(*args):
        product = args[0] * args[1]
        print("Product of {} and {} is {} ".format(args[0], args[1], product))
        return func(args[0], args[1])
    return inner_function

@decorator_function
def add(num1, num2):
    value = num1 + num2
    print("The sum of {} and {} is {}.".format(num1, num2, value))

# execute
add(10, 20)
Output:
Product of 10 and 20 is 200
The sum of 10 and 20 is 30.
This method has a drawback: you can no longer use the add() function to just add the numbers, because it will always print the product along with the sum. So analyze your needs properly and choose the approach that fits them.
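A related detail worth knowing (not covered in the article above): because decoration replaces add() with inner_function(), introspection attributes such as add.__name__ report the wrapper's name. The standard library's functools.wraps decorator fixes this by copying the wrapped function's metadata onto the wrapper. A minimal sketch:

```python
import functools

def decorator_function(func):
    @functools.wraps(func)  # copy __name__, __doc__, etc. from func onto the wrapper
    def inner_function(*args):
        product = args[0] * args[1]
        print("Product of {} and {} is {} ".format(args[0], args[1], product))
        return func(args[0], args[1])
    return inner_function

@decorator_function
def add(num1, num2):
    print("The sum of {} and {} is {}.".format(num1, num2, num1 + num2))

print(add.__name__)  # 'add', not 'inner_function'
```

Without functools.wraps, add.__name__ would be 'inner_function', which can make debugging and documentation tools confusing.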
Conclusion
In this article, we have discussed what Python decorators are and how we can implement them using functions. To learn more about Python programming, you can read this article on list comprehension. You may also like this article on the linked list in Python.
The post Python Decorators appeared first on PythonForBeginners.com.
PyCon
PyCon US 2022 Call for Proposals is open!
Please make note of the important deadline for submissions:
All proposals are due December 20, 2021 AoE
We need beginner, intermediate, and advanced proposals on all sorts of topics. We also need beginner, intermediate, and advanced speakers to give said presentations. You don't need to be a 20-year veteran who has spoken at dozens of conferences. On all fronts, we need all types of people. That's what this community is comprised of, so that's what this conference's schedule should be made from.
For more information on where and how to submit your proposal, visit the main Speaking page on the PyCon US 2022 website.
We've provided some guidelines, tips, and advice on the types of proposals you can submit, so please be sure to check the following pages for more information:
Real Python
Writing Idiomatic Python
What programming idioms are unique to Python? This course is both a short overview for people coming from other languages as well as an introduction for programming beginners to the idiomatic practices within Python.
In this course, you'll learn:
- How to access and interpret The Zen of Python
- How to set up a script
- How to test truth values
- How to swap variables in-place
- How to create Pythonic for loops
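Two of the idioms listed above can be shown in a few lines. This is a sketch, not taken from the course itself: in-place swapping with tuple unpacking, and a Pythonic for loop using enumerate instead of a manual index counter.

```python
# In-place swap via tuple unpacking: no temporary variable needed.
a, b = 1, 2
a, b = b, a
print(a, b)  # 2 1

# Pythonic for loop: enumerate yields the index and item together,
# instead of iterating over range(len(colors)) and indexing by hand.
colors = ['red', 'green', 'blue']
for i, color in enumerate(colors):
    print(i, color)
```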
Codementor
How to Receive and Respond to Incoming SMS Messages in Python with Flask and Plivo
Sending an outbound message (https://www.plivo.com/blog/send-sms-in-python/) using the Plivo SMS...




