Planet Python
Last update: May 13, 2020 10:46 AM UTC
May 13, 2020
CubicWeb
A roadmap to Cubicweb 3.28 (and beyond)
Yesterday at Logilab we held a small meeting to discuss a roadmap for Cubicweb 3.28 (and beyond), and we would like to report back to you from this meeting.
Cubicweb 3.28 will mainly bring the implementation of content negotiation. This means that Cubicweb will handle content negotiation and will be able to return RDF, using Cubicweb's ontology, when a client requests it.
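As a rough illustration of what content negotiation enables (the URL and media type below are hypothetical, not a real CubicWeb endpoint), a client could ask the same resource for RDF instead of HTML via the Accept header:

```python
import urllib.request

# Hypothetical example: with content negotiation, the same URL can serve
# HTML to a browser and RDF to a machine client, depending on the Accept
# header the client sends.
req = urllib.request.Request(
    "https://example.org/some-entity",   # illustrative URL
    headers={"Accept": "text/turtle"},   # ask for RDF (Turtle) instead of HTML
)
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```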
The 3.28 release will have other features as well (like a new variables attribute on ResultSet that contains the names of the projected variables, etc). Those features will be detailed in the release changelog.
Before releasing this version, we would like to finish the migration to Heptapod, to make sure that everything is OK. The remaining tasks are:
- fixing the CI (there are still some random failures that need further investigation)
- migrating the Jenkins job that pushes images to hub.docker.com to Heptapod, to make everything available from the forge. It will then be clear to everyone when a job is done, and what its status is.
Besides releasing Cubicweb 3.28, its ecosystem will also be updated:
- logilab-common: a new version will be released very soon, bringing a refactoring of the deprecation system and annotations (coming from pyannotate)
- yams: a new version is coming. This version:
- brings type annotations (manually written and carefully checked);
- removes a lot of abbreviations to make the code clearer;
- removes some magic related to an object which used to behave like a string.
The goal of these two releases is to have type annotations in the core libraries used by CubicWeb, and then to be able to bring type annotations into CubicWeb itself in a future version.
Some “modernisation” has also been started on those projects (fixing flake8 issues when needed, repainting the code black). This “modernisation” step is still ongoing on the different projects related to CubicWeb (and is complete for yams and logilab-common).
In the medium term, we would like to focus on the documentation of CubicWeb and its ecosystem. We know that it is really hard for newcomers (and even for ourselves sometimes) to understand how to start, what each module is doing, etc. Automatic documentation has been released for some modules (see 1, 2 or 3 for instance). It would be nice to automate the update of the documentation on readthedocs, update the old examples, and add new ones about the new features we are adding (like content negotiation, pyramid predicates, etc). This could be done during a team Friday sprint or hackathon, for instance. CubicWeb itself would also need some modernisation (running black? and above all, making all files flake8 compliant…).
Regarding CubicWeb development, all (or at least a lot of) cubes and Cubicweb-related projects have moved from cubicweb.org's forge to our instance of Heptapod (4 and 5). Some issues have been imported from cubicweb.org to Heptapod. New issues should be opened on Heptapod, and reviews should also be done there. We hope this will ease the reappropriation of the code base and stimulate new merge requests :)
To end this report, we would like to emphasize that we will try to hold a « remote Cubicweb meeting » every Tuesday at 2 pm. If you would like to participate in this meeting, you are more than welcome (if you need the webconference URL, contact one of us and we will provide it to you). We also created a #Cubicweb channel on matrix.logilab.org; feel free to ask for an invitation if you'd like to discuss Cubicweb-related things with us.
All the best, and… see you next Tuesday :)
Codementor
Create your first web scraper with ScrapingBee API and Python
Learn how to use a cloud-based scraping API to scrape web pages without getting blocked.
Brett Cannon
Thoughts on where tools fit into a workflow
I am going to admit upfront that this is a thought piece, a brain dump, me thinking out loud. Do not assume there is a lesson here, nor some goal I have in mind. No, this blog post is providing me a place to write out what tools I use when in my ideal development workflow (and yes, this will have a bias towards the Python extension for VS Code 😁).
While actively coding
The code-test-fix loop
Typically when I am coding I think about what problem I'm trying to solve, what the API should look like, and then what it would take to test it. I then start to code up that solution, writing tests as I go. That means I have a virtual environment set up with the latest version of Python and all relevant required and testing-related dependencies installed into it. I am also regularly running the test I am currently working on or the related tests I have to prevent any regressions. But the key point is a tight development loop where I'm focusing on the code I'm actively working on.
The tools I'm using the most during this time are:
- pytest
- venv (although since virtualenv 20 is now built on top of venv, I should look into using virtualenv)
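The tight loop above can be as small as a single file; a minimal sketch (the function and test names here are made up for illustration):

```python
# slug.py - the code under development and its test side by side,
# run repeatedly with e.g. `pytest slug.py` while iterating.

def slugify(title):
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   everywhere ") == "spaces-everywhere"
```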
Making sure I didn't mess up
Once code starts to reach a steady state and the design seems "done", that's when I start to run linters and to expand the testing to other versions of Python. I also start to care about test coverage. I put this off until the code is "stable" to minimize churn and the overhead of running a wider set of tools and waiting for their results, which slows down the development process.
Now, I should clarify that for me, linters are tools that you run to check your code for something and which do not require running under a different version of Python. If you have to run something under every version of Python that you support, then that's a test to me, not a lint. This allows me to group linters together and run them only once instead of under every version of Python alongside the tests, cutting the execution time down.
The tools that I am using during this time are:
- coverage.py
- Black
- mypy
- I should probably start using Pyflakes (or flake8 --ignore=C,E,W)
Running these three tools all the time can be a bit time-consuming. I have to remember to do it and they don't necessarily run quickly. Luckily I can amortize the costs of running linters thanks to support in the Python extension for VS Code. If I set up the linters to run when I save, they run regularly in the background and I don't have to deal later with the work they would otherwise ask of me. Since the results show up as I work, without waiting for a manual run, running linters becomes much cheaper. The same goes for setting up formatters (which also act as linters when you're enforcing style).
The problem is not everyone uses VS Code. To handle the issue of not remembering what to run, people often set up tox or nox, which also have the benefit of making it easier to run tests against other versions of Python. Another option is to set up pre-commit so as not to forget, which gets you linting for other things like trailing whitespace, well-formed JSON, etc. So there's overlap between tox/nox and pre-commit, but also differentiators. This leads some people to set up tox/nox to execute pre-commit for linting, to get the most they can out of all the tools.
So tools people use to run linters:
- tox or nox
- pre-commit
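As a sketch of how this grouping can look in practice, here is a hypothetical noxfile.py; the Python versions and tool choices are assumptions for illustration, not a recommendation:

```python
import nox

# Tests run under every supported Python version...
@nox.session(python=["3.7", "3.8"])
def tests(session):
    session.install("pytest", ".")
    session.run("pytest")

# ...while linters, by the definition above, only need one interpreter.
@nox.session(python="3.8")
def lint(session):
    session.install("black", "mypy")
    session.run("black", "--check", ".")
    session.run("mypy", ".")
```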
But then there is also the situation where people have their own editors that they want to set up to use these linters. This is why build tools like poetry and flit have the concept of development dependencies. That way everyone working on the project gets the same tools installed and can set them up however they want to fit their workflow.
Proposing a change
When getting ready to create a pull request, I want the tests and linters to run against all supported versions of Python and OSs via continuous integration. To make things easier to debug when CI flags a problem, I want my CI to be nothing more than running something I could run locally if I had the appropriate setup. I am also of the opinion that people proposing PRs should do as much testing locally as possible which requires being able to replicate CI runs locally (I hold this view because people very often don't pay attention to whether CI for their PR goes green or not and making the maintainer have to message you saying your PR is failing CI adds delays and takes up time).
There is one decision to make about tooling updates. Obviously tools like the linters that you rely on will make new releases and chances are you want to use them (improved error detection, bugfixes, etc.). There are two ways of handling this.
One is to leave the development dependencies unpinned. Unfortunately that can lead to an unsuspecting contributor having CI fail on their PR simply because a development dependency changed. To avoid that I can run a CI cron job at some frequency to try and pick up those sorts of failures early on.
The other option is to pin my development dependencies (and I truly mean pin; I have had micro releases break CI because a project added a warning and a flag was set to make warnings be considered errors). This has the side-effect that in order to get those bugfixes and improvements from the tools I will need to occasionally check for updates. It's possible to use tools like Dependabot to update pinned dependencies in an automated fashion to alleviate the burden.
Tools for CI:
Preparing for a release
I want to make sure CI tests against the wheel that you would be uploading to PyPI (setuptools users will know why this is important thanks to MANIFEST.in). I want the same OS test coverage as when testing a PR. For Python versions, I will test against all supported versions plus the in-development version of Python, where I allow for failures (see my blog post on why this is helpful and how to do it on Travis).
With testing and linting being clean, that leaves release-only prep work. I have to update the version if I haven't been doing that continuously. The changelog will also require updating if I haven't been doing it after every commit. With all of this in place I should be ready to build the sdist and wheel(s) and upload them to PyPI. Finally, the release needs to be tagged in git.
Conclusion (?)
Let's take setting up Black for formatting. That would mean:
- List Black as a development dependency
- Set up VS Code to run Black
- Set up pre-commit to enforce Black
- Set up tox or nox to run pre-commit
- Set up GitHub Actions to lint using tox or nox
What about mypy?
- List mypy as a development dependency
- Set up VS Code to run mypy
- Set up pre-commit to enforce mypy
Repeat as necessary for other linters. There's a bit of repetition, especially considering how I set up Black will probably be the same across all of my projects and very similar to other people's. And if there is an update to a linter?
- Update pre-commit
- Potentially update development dependency pin
There's also another form of repetition when you add support for a new version of Python:
- Update your Python requirement for build back-end
- Update your trove classifiers
- Update tox or nox
- Update GitHub Actions
Once again, how I do this is very likely the same for all of my projects and lots of other people.
So if I'm doing the same basic thing for the same tools, how can I cut down on this repetition? I could use Cookiecutter to stamp out new repositories with all of this already set up. That does have the drawback of not being able to update things later. Feels like I want a Dependabot for linters and new Python versions.
I also need to automate my release workflow. I've taken a stab at it, but it's not working quite yet. If I ditched SemVer for all my projects it would greatly simplify everything. 🤔
May 12, 2020
PyCoder’s Weekly
Issue #420 (May 12, 2020)
Under Discussion: The Performance of Python
Victor Stinner and Julien Danjou sat down (remotely, that is) with Anne-Laure Civeyrac to talk about Python’s performance. They discuss everything from profiling and why Python is slow to projects aimed at improving Python’s performance. Check it out!
ANNE-LAURE CIVEYRAC
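The profiling they discuss can be tried in a few lines with the standard library’s cProfile; the function below is a toy example, not anything from the episode:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """A deliberately naive loop to give the profiler something to see."""
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Print the five most expensive entries by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```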
How to Move a Django Model to Another App
In this step-by-step tutorial, you’ll learn how to move a Django model from one app to another using Django migrations. You’ll explore three different techniques and learn some helpful guidelines for choosing the best approach for your situation and needs.
REAL PYTHON
Find Performance Bottlenecks in Python Code
“We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.” - Donald Knuth. Blackfire is built to let you find the 3%. Quick install, appealing and user-friendly UI. →
BLACKFIRE sponsor
Naomi Ceder to Step Down From PSF Board of Directors
Naomi Ceder will not be running for re-election to the PSF board of directors. In this blog post, she explains her reasons and thanks the community for the chance to serve.
NAOMI CEDER
Effortless Concurrency with Python’s concurrent.futures
Python’s concurrent.futures module is a high-level interface for the threading and multiprocessing modules. You can use it to effortlessly code asynchronous tasks!
REDOWAN DELOWAR • Shared by Redowan Delowar
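Part of the module’s appeal is how little code a pool takes; a minimal sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

# ThreadPoolExecutor and ProcessPoolExecutor share the same high-level
# interface, so switching between threads and processes is a one-line change.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```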
Remapping Python Opcodes
Take a deep dive into .pyc files, opcodes, and disassembling code in this in-depth article about decompiling a .pyc module with obfuscated opcodes.
CHRIS LYNE
Calculating Streaks in Pandas
Identifying streaks can be useful when dealing with sporting statistics, app logins, and more. Learn how to calculate streaks in Python using the pandas library and visualize them using Matplotlib.
JOSH DEVLIN • Shared by Josh Devlin
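The core pandas trick for streaks is compact; a sketch with made-up win/loss data (the article’s own approach may differ in details):

```python
import pandas as pd

# 1 = win, 0 = loss (illustrative data)
wins = pd.Series([1, 1, 0, 1, 1, 1, 0])

# Each time the value changes, start a new "block" of consecutive values.
blocks = (wins != wins.shift()).cumsum()

# Running position within each block, zeroed out on non-win rows.
streak = (wins.groupby(blocks).cumcount() + 1).where(wins == 1, 0)

print(streak.tolist())  # [1, 2, 0, 1, 2, 3, 0]
print(streak.max())     # longest streak: 3
```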
Discussions
Which Characters Are Considered Whitespace by split()?
If you’re porting some Python 2 code to Python 3, you might want to check this out.
STACK OVERFLOW
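The gist, for the curious: Python 3’s str.split() treats every character for which str.isspace() is true as whitespace, including control characters that Python 2’s byte-string split() ignored:

```python
text = "a\x1cb"  # \x1c is the FILE SEPARATOR control character

print(text.split())      # ['a', 'b'] - \x1c counts as whitespace in Python 3
print("\x1c".isspace())  # True
print(text.split(" "))   # ['a\x1cb'] - an explicit separator is taken literally
```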
The Python World Has Shown Increased Preference for Double Quotes
Is this the new tabs vs. spaces? Which do you prefer?
RAYMOND HETTINGER ON TWITTER
Python Jobs
Senior Python Engineer (Remote, East Coast Only)
Fullstack Software Engineer ML, Python (Remote)
Python Programmer (Remote)
Sr Python Developer Django, Flask, DevOps (Remote)
Articles & Tutorials
Faster Machine Learning on Larger Graphs: How NumPy and Pandas Slashed Memory and Time in StellarGraph
This week, StellarGraph released a new version of its open source library for machine learning on graphs. One of the most exciting features of StellarGraph 1.0 is a new graph data structure — built using NumPy and Pandas — that results in significantly lower memory usage and faster construction times.
HUON WILSON • Shared by Tim Pitman
Python eval(): Evaluate Expressions Dynamically
Learn how Python’s eval() built-in works and how to use it effectively in your programs. Additionally, you’ll learn how to minimize the security risks associated with the use of eval().
REAL PYTHON
Python WebDev Environment
Want to get your web application project started quicker? ActiveState’s WebDev build for Python has everything you need in a single, pre-built runtime environment, including Django, Flask and Bottle frameworks, as well as other useful tools and utilities. Get it for Windows, Mac and Linux. →
ACTIVESTATE sponsor
Systems Programming With bash and Python 3
Python’s portability, quick development time, and batteries-included philosophy make it an excellent choice for SysAdmins looking to automate command-line tasks.
KEN YOUENS-CLARK
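As a taste of the batteries-included point, a task that might otherwise be a bash pipeline fits in a few lines of stdlib Python (the function name is illustrative):

```python
import tempfile
from pathlib import Path

def tree_size(root):
    """Total size in bytes of all regular files under root, like `du -sb`."""
    return sum(p.stat().st_size for p in Path(root).rglob("*") if p.is_file())

# Quick demonstration against a throwaway directory:
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "a.txt").write_bytes(b"12345")
    (Path(tmp) / "sub").mkdir()
    (Path(tmp) / "sub" / "b.txt").write_bytes(b"123")
    print(tree_size(tmp))  # 8
```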
Python Refactorings
Here are six ways you can refactor code to be more concise, more Pythonic, and more performant.
NICK THAPEN
RPP – Episode #8: Docker + Python for Data Science and Machine Learning
Docker is a common tool for Python developers creating and deploying applications, but what do you need to know if you want to use Docker for data science and machine learning? What are the best practices if you want to start using containers for your scientific projects? This week Christopher’s guest is Tania Allard, a Sr. Developer Advocate at Microsoft focusing on machine learning, scientific computing, research, and open source.
REAL PYTHON podcast
Learn the Foundational Coding and Statistics Skills Needed to Start Your Career in Data Science
Interested in data science but not sure where to get started? Springboard’s Data Science Prep course was carefully crafted for go-getters who are ready for a challenge and need to brush up on a few basics before diving into a data science bootcamp.
SPRINGBOARD sponsor
The 2020 Python Language Summit
“The Python Language Summit is a small gathering of Python language implementers (both the core developers of CPython and alternative Pythons), as well as third-party library authors and other Python community members. The summit features short presentations followed by group discussions. In 2020, the Summit was held over two days by videoconference […]”
PYTHON SOFTWARE FOUNDATION
Improve Your Tests With the Python Mock Object Library
In this course, you’ll learn how to use the Python mock object library, unittest.mock, to create and use mock objects to improve your tests. Obstacles like complex logic and unpredictable dependencies make writing valuable tests difficult, but unittest.mock can help you overcome these obstacles.
REAL PYTHON video
Monitoring Python Flask Microservices With Prometheus
Learn how to set up Prometheus on a Flask application to serve up metrics like requests per second, average response time, memory usage, and CPU usage.
VIKTOR ADAM
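The article presumably uses the prometheus_client library; as a stdlib-only sketch of the underlying idea (all names here are illustrative), count requests per path in a plain WSGI app and expose the counts at /metrics in Prometheus’ text format:

```python
from collections import Counter

request_counts = Counter()

def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path == "/metrics":
        # Render counters in Prometheus' text exposition format.
        body = "".join(
            f'http_requests_total{{path="{p}"}} {n}\n'
            for p, n in sorted(request_counts.items())
        ).encode()
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body]
    request_counts[path] += 1
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]
```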
Projects & Code
Mimesis: Fake Data Generator
GITHUB.COM/LK-GEIMFARI • Shared by Isaak
Events
EuroPython 2020: Partial Speaker Lineup Released
EuroPython has released part of their speaker lineup for the conference, which is slated to take place online from July 23–26, 2020.
EUROPYTHON.EU
Happy Pythoning!
This was PyCoder’s Weekly Issue #420.
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Python Engineering at Microsoft
Python in Visual Studio Code – May 2020 Release
We are pleased to announce that the May 2020 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. If you already have the Python extension installed, you can also get the latest update by restarting Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation.
In this release we addressed 42 issues, and it includes the ability to browse for or enter an interpreter path on selection. If you’re interested, you can check the full list of improvements in our changelog.
Ability to browse for interpreter path
To make selecting or changing interpreter easier, you now have the option to browse for a Python interpreter in your file system. You can also set an interpreter by manually entering its path:
Coming Next: moving python.pythonPath out of settings.json
One change that is coming relates to how the Python extension handles Python interpreter selection. Currently the path to the selected interpreter is stored in the workspace settings. This can be a problem when you share VS Code workspace settings in a GitHub repo, for example, as reported in our issue tracker.
In order to make the interpreter information system agnostic and prevent sharing the interpreter path (which commonly won’t be the same across different machines), we’re going to deprecate the python.pythonPath setting in the Python extension.
These changes will be added gradually as an experiment. If you’re interested to try it ahead of time, you can opt into this functionality by adding the following line to your User settings (View > Command Palette… and run Preferences: Open Settings (JSON)) and then reloading the window (View > Command Palette… and run Developer: Reload Window):
"python.experiments.optInto": ["DeprecatePythonPath - experiment"]
To see if you are part of an experiment, you can check the first lines in the Python extension output channel. If you wish to opt-out of A/B testing in general, you can open the user settings.json file and set the “python.experiments.enabled” setting to false.
We also have some additional announcements coming soon, so stay tuned!
Other Changes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:
- CVE-2020-1171: Do not perform pipenv interpreter discovery on extension activation. (#11127)
- CVE-2020-1192: Setting “Data Science: Run Startup Commands” is now limited to being a User scope only setting.
- Performance improvements when executing multiple cells in Notebook and Interactive Window using ipywidgets. (#11576)
- Fix for opening the interactive window when no workspace is open. (#11291)
- Update to Jedi 0.17 (thanks Peter Law) (#11221)
Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – May 2020 Release appeared first on Python.
Mike Driscoll
Learn How to Log with Python (Video)
Learn how to use Python’s logging module in this screencast:
You will learn about the following:
- Creating a log
- Logging Levels
- Logging Handlers
- Logging Formatters
- Logging to Multiple Locations
- and more!
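The topics in that list can be combined in a dozen or so lines; the handler choices and log file name below are just examples:

```python
import logging

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)

formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")

# Log to two locations: the console (INFO and up) and a file (everything).
console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(formatter)

file_handler = logging.FileHandler("app.log")
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(formatter)

logger.addHandler(console)
logger.addHandler(file_handler)

logger.debug("goes to the file only")
logger.info("goes to both the console and the file")
```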
The post Learn How to Log with Python (Video) appeared first on The Mouse Vs. The Python.
Real Python
Improve Your Tests With the Python Mock Object Library
When you’re writing robust code, tests are essential for verifying that your application logic is correct, reliable, and efficient. However, the value of your tests depends on how well they demonstrate these criteria. Obstacles such as complex logic and unpredictable dependencies make writing valuable tests difficult. The Python mock object library, unittest.mock, can help you overcome these obstacles.
By the end of this course, you’ll be able to:
- Create Python mock objects using Mock
- Assert that you’re using objects as you intended
- Inspect usage data stored on your Python mocks
- Configure certain aspects of your Python mock objects
- Substitute your mocks for real objects using patch()
- Avoid common problems inherent in Python mocking
You’ll begin by seeing what mocking is and how it will improve your tests!
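A small taste of what the course covers (fetch_user and api are illustrative names, not the course’s own code):

```python
from unittest import mock

def fetch_user(api, user_id):
    """Code under test: depends on some API client object."""
    response = api.get(f"/users/{user_id}")
    return response["name"]

# Replace the real dependency with a Mock and configure its return value...
api = mock.Mock()
api.get.return_value = {"name": "Ada"}

assert fetch_user(api, 42) == "Ada"
# ...then inspect the usage data the mock recorded.
api.get.assert_called_once_with("/users/42")
```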
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Software Foundation
Python’s migration to GitHub - Request for Project Manager Resumes
Timeline
- May 4 - Requests for resumes opens
- June 4 - Requests for resumes closes
- June 12 - Final decision will be made on proposals received
- June 22 - Work will begin
Submitting a proposal
Role description
Goal
Tasks
- Create a timeline for the project with the Python team and GitHub team
- Find out from the community the context behind GitHub search limitations and why bugs.python.org search is sometimes preferred.
- Research the Contributor License Agreement (CLA) process and how it can be achieved outside of bugs.python.org. Work with interested contractors, volunteers, and the PSF’s Director of Infrastructure on a solution.
- Work with GitHub’s migration team and Python’s community on how mapping of fields should work from bugs.python.org to GitHub
- Work with GitHub’s migration team on the transition from bugs.python.org and be the Python point of contact for GitHub. This includes helping field questions from GitHub to the Steering Council/core devs and vice versa.
- Assist the Python community with creating guidelines on how people are promoted to Python’s triage team.
- Obtain from GitHub a list of projects that have bots built that may help Python with "nosy lists"
- Oversee the creation of the new workflow on GitHub
- Assist with the creation of GitHub labels and templates when necessary
- Oversee the creation of the sandbox issue tracker on GitHub to experiment and test the new workflow
- Ensure that the sandbox receives adequate testing from the Python team
- Update the devguide with the new process ahead of the migration and communicate it to the core developers
- Communicate with PSF staff on a regular basis when necessary and provide monthly reports via email.
Estimated budget
Necessary Skills
- Excellent time management skills
- Must be very organized, punctual, and detail-oriented
- Experience working with volunteers
- Excellent written and verbal communication
- Experience working with software development teams (remotely is a plus)
- Ability to balance demand and prioritize
- Experience working with GitHub
- Experience with GitHub APIs is a plus
- Experience working with Roundup is a plus
Questions?
Catalin George Festila
Python 3.8.3 : Create a shortcut and add Python to the Context Menu.
Today's tutorial is a simple script for Python users. If you run this Python script in a folder, you will get a shortcut to Python and an entry for Python in the Windows 10 Context Menu. The Context Menu can be opened with a right click of your mouse; see the screenshot. This script can also be used with any executable if you make some changes. The script begins: import os, import pythoncom, from …
Talk Python to Me
#264 10 tips every Flask developer should know
Are you a web developer who uses Flask? It has become the most popular Python web framework. Even if you have used it for years, I bet we cover at least one thing that will surprise you and make your Flask code better.

Join me as I speak with Miguel Grinberg about his top 10 list for tips and tricks in the Flask world. They're great!

Links from the show:
- Miguel on Twitter: https://twitter.com/miguelgrinberg
- Miguel's blog: http://blog.miguelgrinberg.com
- python-dotenv package: https://pypi.org/project/python-dotenv/
- httpie package: https://httpie.org/
- Quart: https://pgjones.gitlab.io/quart/
- Talk Python episode on Quart: https://talkpython.fm/episodes/show/147/quart-flask-but-3x-faster
- secure.py package: https://github.com/TypeError/secure.py

Sponsors:
- Sentry Error Monitoring: https://talkpython.fm/sentry
- Linode: https://talkpython.fm/linode
- Talk Python Training: https://talkpython.fm/training
Wing Tips
Moving the Program Counter in Wing's Python Debugger
This Wing Tip describes how to move the program counter while debugging Python code in Wing Personal and Wing Pro. This is a good way to go back and re-execute previously visited Python code, in order to trace through to the cause of a bug without having to restart the debug process.
To move the program counter, the debugger must be running and paused or stopped at a breakpoint. Then right-click on the target line in the editor and select Move Program Counter Here:

Shown above: Right-click to select Move Program Counter Here, then continue stepping with Step Over and Step Into in the toolbar.
Limitations: Due to the way Python is implemented, the program counter can only be moved within the current inner-most stack frame and it may not be moved within an exception handler, after an exception has been raised but not yet handled.
That's it for now! We'll be back soon with more Wing Tips for Wing Python IDE.
As always, please don't hesitate to email support@wingware.com if you run into problems or have any questions.
May 11, 2020
Daniel Roy Greenfeld
Two Scoops of Django 3.x Released
We just released the early release (alpha) of the fifth edition of our book, titled Two Scoops of Django 3.x. The 3.x means we are supporting Django 3.0, 3.1, and 3.2 Long Term Support (LTS) releases, ensuring the content will be valid until April of 2024.
So long as it is May 11, 2020, anywhere on planet Earth, the e-book version sells for just US$42.95!
On May 12th, 2020, the price goes up to $49.95. Hurry up and get your book!
For now, the e-book is just in PDF format and will be expanded to epub and mobi formats in the days to come. Readers of this alpha version get all the updates and have the opportunity to help us shape the direction of the book through their feedback, and to be credited as contributors.
If you bought the 1.11 e-book in 2020 you'll receive an email on May 11th with a discount code covering the cost of the new edition.
The book will also be printed, but for several reasons that won't happen until hopefully August of this year. When we get closer to that date we'll take pre-orders and send everyone who ordered an e-book a big discount code.
Due to popular demand, we are selling group licenses of Two Scoops of Django 3.x, in 10 developer, 50 developer, and 250 developer packages. These can be found as options in the product selection dropdown.
Learn more and order the 5th edition of our venerable series about Django best practices.
Podcast.__init__
Managing Distributed Teams In The Age Of Remote Work
More of us are working remotely than ever before, many with no prior experience with a remote work environment. In this episode Quinn Slack discusses his thoughts and experience of running Sourcegraph as a fully distributed company. He covers the lessons that he has learned in moving from partially to fully remote, the practices that have worked well in managing a distributed workforce, and the challenges that he has faced in the process. If you are struggling with your remote work situation then this conversation has some useful tips and references for further reading to help you be successful in the current environment.
Announcements
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
- When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With 200 Gbit/s private networking, node balancers, a 40 Gbit/s public network, fast object storage, and a brand new managed Kubernetes platform, all controlled by a convenient API you’ve got everything you need to scale up. And for your tasks that need fast computation, such as training machine learning models, they’ve got dedicated CPU and GPU instances. Go to pythonpodcast.com/linode to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
- You monitor your website to make sure that you’re the first to know when something goes wrong, but what about your data? Tidy Data is the DataOps monitoring platform that you’ve been missing. With real time alerts for problems in your databases, ETL pipelines, or data warehouse, and integrations with Slack, Pagerduty, and custom webhooks you can fix the errors before they become a problem. Go to pythonpodcast.com/tidydata today and get started for free with no credit card required.
- Your host as usual is Tobias Macey and today I’m interviewing Quinn Slack about his experience managing a fully remote company and useful tips for remote work
Interview
- Introductions
- How did you get introduced to Python?
- Can you start by giving an overview of the team structure at Sourcegraph?
- You recently moved to being fully remote. What was the motivating factor and how has it changed your personal workflow?
- What is your prior history with working remote?
- Team practices for visibility of progress
- Impact of remote teams on how code is written and organized
- Reducing review burden by writing clearer code
- Structuring meetings when remote
- Points of friction for remote developer teams
- Benefits of being fully remote
- Incentivizing documentation
- Compensation structure
Keep In Touch
Picks
- Tobias
- Quinn
- Skunk Works by Ben Rich
Closing Announcements
- Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Join the community in the new Zulip chat workspace at pythonpodcast.com/chat
Links
- Sourcegraph
- Quinn’s Python Search Engine
- Sourcegraph Employee Handbook
- Gitlab
- Gitlab Handbook
- Zapier
- Zapier Guide To Remote Work
- Automattic
- Automattic Blog On Distributed Work
- Comments Showing Intent
The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA
Real Python
Python eval(): Evaluate Expressions Dynamically
Python’s eval() allows you to evaluate arbitrary Python expressions from a string-based or compiled-code-based input. This function can be handy when you’re trying to dynamically evaluate Python expressions from any input that comes as a string or a compiled code object.
Although Python’s eval() is an incredibly useful tool, the function has some important security implications that you should consider before using it. In this tutorial, you’ll learn how eval() works and how to use it safely and effectively in your Python programs.
In this tutorial, you’ll learn:
- How Python’s eval() works
- How to use eval() to dynamically evaluate arbitrary string-based or compiled-code-based input
- How eval() can make your code insecure and how to minimize the associated security risks
Additionally, you’ll learn how to use Python’s eval() to code an application that interactively evaluates math expressions. With this example, you’ll apply everything you’ve learned about eval() to a real-world problem. If you want to get the code for this application, then you can click on the box below:
Download the sample code: Click here to get the code you'll use to learn about Python's eval() in this tutorial.
Understanding Python’s eval()
You can use the built-in Python eval() to dynamically evaluate expressions from a string-based or compiled-code-based input. If you pass in a string to eval(), then the function parses it, compiles it to bytecode, and evaluates it as a Python expression. But if you call eval() with a compiled code object, then the function performs just the evaluation step, which is quite convenient if you call eval() several times with the same input.
The signature of Python’s eval() is defined as follows:
eval(expression[, globals[, locals]])
The function takes a first argument, called expression, which holds the expression that you need to evaluate. eval() also takes two optional arguments:
- globals
- locals
In the next three sections, you’ll learn what these arguments are and how eval() uses them to evaluate Python expressions on the fly.
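As a quick preview of what those sections cover, the two optional arguments supply the global and local namespaces that eval() uses when resolving names in the expression:

```python
# The second argument to eval() provides the global namespace,
# and the third provides the local namespace.
globals_ns = {"x": 10}
locals_ns = {"y": 32}

result = eval("x + y", globals_ns, locals_ns)
print(result)  # → 42
```

Names that aren't found in either mapping raise a NameError, which is part of what makes these arguments useful for controlling what an expression can see.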
Note: You can also use exec() to dynamically execute Python code. The main difference between eval() and exec() is that eval() can only execute or evaluate expressions, whereas exec() can execute any piece of Python code.
The First Argument: expression
The first argument to eval() is called expression. It’s a required argument that holds the string-based or compiled-code-based input to the function. When you call eval(), the content of expression is evaluated as a Python expression. Check out the following examples that use string-based input:
>>> eval("2 ** 8")
256
>>> eval("1024 + 1024")
2048
>>> eval("sum([8, 16, 32])")
56
>>> x = 100
>>> eval("x * 2")
200
When you call eval() with a string as an argument, the function returns the value that results from evaluating the input string. By default, eval() has access to global names like x in the above example.
To evaluate a string-based expression, Python’s eval() runs the following steps:
- Parse expression
- Compile it to bytecode
- Evaluate it as a Python expression
- Return the result of the evaluation
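The parse and compile steps above can be done ahead of time with the built-in compile(), which is what makes the compiled-code-object form of eval() convenient for repeated evaluation:

```python
# Compile the expression once, in "eval" mode...
code = compile("x * 2", "<string>", "eval")

# ...then evaluate the resulting code object as many times as needed,
# skipping the parse and compile steps on each call.
x = 100
print(eval(code))  # → 200

x = 21
print(eval(code))  # → 42
```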
The name expression for the first argument to eval() highlights that the function works only with expressions and not with compound statements. The Python documentation defines expression as follows:
expression
A piece of syntax which can be evaluated to some value. In other words, an expression is an accumulation of expression elements like literals, names, attribute access, operators or function calls which all return a value. In contrast to many other languages, not all language constructs are expressions. There are also statements which cannot be used as expressions, such as while. Assignments are also statements, not expressions. (Source)
On the other hand, a Python statement has the following definition:
statement
A statement is part of a suite (a “block” of code). A statement is either an expression or one of several constructs with a keyword, such as if, while, or for. (Source)
If you try to pass a compound statement to eval(), then you’ll get a SyntaxError. Take a look at the following example in which you try to execute an if statement using eval():
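A minimal illustration of this expression-only restriction, contrasted with exec(), which does accept statements:

```python
# eval() only accepts expressions, so a compound statement fails to parse:
try:
    eval("if True: print('hello')")
except SyntaxError as exc:
    print(f"SyntaxError: {exc.msg}")

# exec(), by contrast, accepts arbitrary Python code, including statements:
exec("if True: print('hello')")  # prints "hello"
```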
Read the full article at https://realpython.com/python-eval-function/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Catalin George Festila
Python Qt5 - Simple text editor with QPlainTextEdit.
I haven't played with Python in a long time, as you can see from the date of my last article. Today I installed Python 3.8.3 and my favorite PyQt5 and started to see how much I had forgotten of what I knew. Installing PyQt5 was simple:
pip3 install PyQt5
Collecting PyQt5
Downloading PyQt5-5.14.2-5.14.2-cp35.cp36.cp37.cp38-none-win_amd64.whl (52.9 MB)
|████████████████████████████████| 52.9 MB 80
Julien Danjou
Interview: The Performance of Python

Earlier this year, I was supposed to take part in dotPy, a one-day Python conference happening in Paris. The event has unfortunately been cancelled due to the COVID-19 pandemic.
Both Victor Stinner and I were supposed to attend. Victor had prepared a presentation about Python performance, while I was planning on talking about profiling.
Rather than being completely discouraged, Victor and I sat down (remotely) with Anne Laure from Behind the Code (a blog run by Welcome to the Jungle, the organizers of the dotPy conference).
We discussed Python performance, profiling, speed, projects, problems, analysis, optimization and the GIL.
You can read the interview here.
PyCharm
Interview: PyCharm helping developers to navigate complexity and be more productive
Developing in Python is the dream job of many developers out there. According to the latest StackOverflow survey, Python is the second most ‘loved’ language and the ‘most wanted’ one. Now, imagine working for a Python software house where you have the opportunity to be part of many different projects, sometimes even simultaneously. It sounds amazing, right? But, of course, it also brings some challenges. You need to communicate more, be aware of different project scopes, make decisions under different technology requirements, and much more. Complexity is also closely related to productivity. While some people excel in this scenario, others spend more time finding their way around.
One type of tool that helps developers find their way around and navigate complexity easily is the IDE (Integrated Development Environment). Today we will chat with two professional Python developers who, despite very different backgrounds, chose PyCharm as their main Python IDE. Łukasz Wyszomirski and Łukasz Ćwikliński work at STX Next, a European software house specializing in Python, and will share their experiences with us.
– Hi, nice to meet you both! Could you start by telling us a little bit about yourself and your journey to becoming a Python developer?

Łukasz Wyszomirski: I started my journey to become a programmer at university. Originally I wanted to be an accountant, but I ended up getting interested in programming. Back in secondary school, our teacher took us to an education fair and after learning more about it, I chose programming studies in Gdańsk. In my third year at university, I got my first job as a Ruby on Rails intern.
Łukasz Ćwikliński: After a couple of years as Head of Sales in a company of a dozen or so, I came to a realization that I’m no longer learning new things in that industry. I decided to look for something new and exciting. I quickly learned that IT is something I’m interested in—that’s how my journey with programming started. In my first year, I programmed in PHP, but quickly after that I moved on to Python.
– Can you describe your team at STX Next?
Łukasz W.: Currently I’m working in a team consisting of 5 members: a product owner, a QA specialist, 2 frontend developers, and one backend developer (me). I have two main roles: I’m responsible for building the entire backend in the project and, as I currently have the most experience in commercial projects, I also help the other members of the team, even if I’m not an expert in a particular technology.
Łukasz Ć.: I have worked in several teams so far, and the size of each team was always different. It varied from being a one-man army to a team of 7. In the latter, there were such roles as a QA specialist, backend and frontend developers, a product owner, and DevOps. As for my role, I am a full-stack Python developer.
– What kind of projects do you usually work on?
Łukasz W.: I have experience in big projects from the health and banking sectors but I have also worked on MVP projects. Currently, my team is working on a project in the area of logistics. I can’t really talk about many of the projects we work on, but to give you an idea, you can check out our portfolio.
Łukasz Ć.: So far I have worked on some fintech projects, some internal projects (for instance the company intranet, website, and RMS), and on projects related to A/B tests.
– Among those projects, is there one (or more) that you are really proud of?
Łukasz W.: Working on projects in the health sector is especially gratifying. In my previous job, I had an opportunity to develop a platform for mobile surveys used in developing countries. It was an app used by workers who traveled to villages in those countries, gathering data from pregnant women and newborns. The data was then used to combat the problem of low childbirth survival of women and newborns.
Łukasz Ć.: Yes, I’m very proud of one of the fintech projects I worked on. It started about 2 years ago and it’s still ongoing. Many fintech projects have to be stopped because of problems with funds or competition, but this one is still going strong. On top of that, the code itself is interesting in this project, and the technology stack is very broad, which is a plus, too.
– What are the main challenges that you face while working on multiple projects?
Łukasz W.: The main challenge of working on multiple projects is having to switch the context between them, while still keeping up with the new requirements. However, that’s an issue only if the domain sectors of the projects are different. If the projects are in the same sector, it’s not that big of a deal. There’s also the issue of time and how to divide it to make sure everything is done efficiently.
Łukasz Ć.: I think that the greatest challenge is having to switch contexts between the projects. Another challenge is related to all the changes that happen in a given project when I’m not working on it. Sometimes when I spend a week focusing on one of the projects new features are introduced in the second one and, later on, when I go back to the first project, I need to find out what’s been happening there. It takes a while to catch up and get on the same page with everyone else.
– What technologies do you use for your Python development?
Łukasz W.: It depends on what we need in the given project. Currently, in each project I’m using Celery, a database, and the Django-rest-framework. When we need a good analytics system I tend to use the ELK stack. I also prefer Django over Flask. In terms of tools in general, I also use Docker containers. In all honesty, though, most of the tools I use are in PyCharm — the terminal, the connection to the database, the Python console — this way I have all that and more just one click away.
– PyCharm is your primary Python IDE, can you recall when you first decided to use it?
Łukasz W.: I mentioned before that my first programming job was as a Ruby on Rails intern. Back then I didn’t know which IDE was good, so my teammates suggested that I use RubyMine. Later on, I had to start programming in Java, so, as far as IDEs go, I had only one choice: IntelliJ. After around a year of that, I had an opportunity to start a new project which would be written in Python, alongside another system, which I worked on in Java. Back at university, I had a chance to check out other Python IDEs and I had bad memories of those, so after RubyMine and IntelliJ I wanted to try PyCharm.
Łukasz Ć: About 3.5 years ago I started working with STX Next, and I had the possibility to use part of my training budget on tools for my work. I’ve used JetBrains tools before, so this was a perfect opportunity to purchase the license.
– Which PyCharm features increase your productivity the most?
Łukasz W.: There’s a bunch of those. The main one I use a lot is definitely the debugger. It allows me to connect to the project, pause the application at a given point, and verify the variables or contexts. I also appreciate the fact that I can configure the project in a way that lets me launch it from PyCharm, with or without the debugger. I also use the file watchers a lot. They let me define which files PyCharm should watch and which linters to launch when they change. All of us try to keep good practices in mind when writing code, but this way I don’t have to worry about them too much — PyCharm will worry about them for me. For example, it can add commas if I end up forgetting them. Finally, I appreciate that PyCharm provides an actual connection to the database.
Łukasz Ć: I’d say this is my list: project interpreter, open recent, jump to the declaration, setting server configuration, autosuggestion, and visual guides.
– Have you ever encouraged other members of your team to use PyCharm?
Łukasz W.: I’m the only Python developer in my current team, but before that using PyCharm was a standard in my team. A colleague from another team helped us in one of the projects recently and I recommended PyCharm to him. Now he uses it whenever he’s working with Python. In fact, as far as I know, when he works on the frontend he uses WebStorm — another JetBrains IDE, but for JavaScript.
– Thank you for all your answers! Is there anything else you would like to share with us today?
Łukasz W.: You guys are doing a really good job developing software, not only for Python. I had a chance to use your other products like RubyMine, IntelliJ, dotPeek, etc., and it was a good experience.
Łukasz Ć.: Keep doing a good job! I work with 2-3 tools that were developed by you guys. Sometimes I do frontend, and then I use WebStorm. I also use Java at university and then I stick to IntelliJ.
Read this blog post to learn more about what our interviewees and their colleagues think of the different IDEs in the market and why they chose PyCharm as number one.
About the company
STX Next is a European software house specialized in Python. Over 200 developers stand ready to empower different projects with extraordinary code and a results-driven Agile process. Apart from Python, STX is also an expert in JavaScript development. Their toolbox of frameworks includes Django, Flask, Angular, and React, each one chosen to create reliable solutions in short order. Their team consists of over 350 professionals, from software developers to UX designers, to automatic QA testers, to communication experts, all of them ready to ensure smooth cooperation with STX Next’s partners.
Codementor
🤖 Interactive Machine Learning Experiments
This is a collection of interactive machine-learning experiments. Each experiment consists of 🏋️ Jupyter/Colab notebook (to see how a model was trained) and 🎨 demo page (to see a model in action right in your browser).
Mike Driscoll
PyDev of the Week: Jan Giacomelli
This week we welcome Jan Giacomelli (@jangiacomelli) as our PyDev of the Week. Jan is an entrepreneur and blogs about Python. You can see what projects Jan contributes to over on Github.
Let’s spend a few minutes and get to know Jan better!

Can you tell us a little about yourself (hobbies, education, etc):
I’ve been programming for a while. I started when I was a high school senior – I made a scraper for online betting webpages. After that, I studied electrical engineering and finished my MSc degree. I’ve been working as a software engineer since my student years.
I trained in alpine skiing for almost a decade, and after that I also got a ski instructor license, so in the winter, ski centers are the place to go. I also love windsurfing and squash. Besides sports, I like to play guitar and cook.

Why did you start using Python?
At my first programming job there was a guy who really loved Python. He introduced me to it when we were working together on a project. I immediately loved it. Then I used it during my university studies for ML and math tasks. I also used it for my master’s thesis, where I was building a model of a guitar amplifier with neural networks. It felt more natural to me than, for example, Matlab. I have continued using it ever since.
What other programming languages do you know and which is your favorite?
I have also worked in JavaScript, Java, PHP, C, and C#. I enjoy working with Python the most. Since I work on ML projects most of the time, it is also the language I use the most. All in all, I try to use the right tool for the job – I won’t use Python for real-time applications such as audio recording.
What projects are you working on now?
I am Chief software architect/engineer at typless. We are building AI services for data extraction from documents such as invoices, receipts, declarations, reports,… I write a blog with development stories and tutorials. I also tweet about software development.
Which Python libraries are your favorite (core or 3rd party)?
Among third-party libraries, I definitely love Django and the Django REST framework. I also enjoy using scikit, keras, and numpy.
How did you become an entrepreneur?

In May 2017 I attended the DragonHack hackathon. In 24 hours we developed a mobile app. Users were able to take a photo of a document (a book page, for example) and the app converted it into an editable Word document with the same layout (headers, paragraphs, positions, …). After that, we started thinking about starting our own company. First, we got a project for some industrial optimization. We used the money from this project to start developing a data extraction AI service. This service, called typless, can now be trained to extract data from any document.
Do you have any advice for other people who would like to start a business?
First, you must strongly believe that you will succeed. Develop an MVP as fast as you can and then start selling it. Sell as fast as you can and be creative about it. Don’t take investors who offer only money – a startup is like a family.
Is there anything else you’d like to say?
The more you know about programming the less you think you know.
Thanks for doing the interview, Jan!
The post PyDev of the Week: Jan Giacomelli appeared first on The Mouse Vs. The Python.
The Three of Wands
Building Pyrseia III: Server Middleware, Client Senders, CLI and InApp Validators
This is the third article in the Pyrseia series. The others are:
If you want to follow along with the code, this article refers to commit 5abf2eda9be06b7417395a50ef676454bbd8f667.
Server Middleware
I've added the concept of server middleware to Pyrseia. Taking a page from aiohttp's book, each server middleware is basically a coroutine that gets called with 3 arguments: the current request context, a pyrseia.Call instance (which has the function name and arguments), and a coroutine to continue the chain. The type of this continuation coroutine is NextMiddleware, which is an alias for Callable[[CTXT, Call], Awaitable[Any]].
This gives middleware a very simple interface but a large amount of flexibility. Your middleware doesn't have to call the continuation coroutine, it can return a result or raise an error right then and there. Your middleware can also change the context and Call instance it received in whatever way it wants before passing them on.
A very simple logging middleware could look like:
import logging

from pyrseia import Call

log = logging.getLogger(__name__)

async def logging_middleware(ctx, call: Call, next):
    log.info(f"Processing {call.name}")
    try:
        return await next(ctx, call)
    finally:
        log.info(f"Processed {call.name}")
You pass in a list of middleware when creating the server.
from aiohttp import web
from pyrseia.aiohttp import create_aiohttp_app
from tests.calculator import Calculator
serv = server(Calculator, web.Request, middleware=[logging_middleware])
app = create_aiohttp_app(serv)
web.run_app(app)
Clients don't currently support middleware, but they should since it would be very useful for mixing in generic behavior. For example, the middleware could retry requests on certain errors or implement exponential backoff. Clients don't support request contexts though, so the API would be a tiny bit different.
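As a sketch of what such generic retry behavior could look like (hypothetical names throughout; the signature is the server-side one described above — a context, a Call, and a continuation coroutine):

```python
import asyncio


async def retry_middleware(ctx, call, next, attempts=3):
    """Retry the rest of the chain on transient failure, with exponential backoff."""
    delay = 0.01
    for attempt in range(attempts):
        try:
            return await next(ctx, call)
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts, propagate the error
            await asyncio.sleep(delay)
            delay *= 2


# A tiny demonstration with a fake continuation that fails twice, then succeeds.
class FakeCall:
    name = "add"


calls = {"n": 0}


async def flaky_next(ctx, call):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return 42


print(asyncio.run(retry_middleware(None, FakeCall(), flaky_next)))  # → 42
```

The same shape would work for client middleware once the API exists, minus the request context.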
Client Rearchitecture
I've refactored what used to be client adapters into two parts by extracting the logic for actually making the request. So now we have:
- the client adapters, which are async generators and essentially factories for framework-specific clients
- client senders, which are coroutines that are given a framework-specific client, a Call instance, and the type of the response, and are supposed to actually perform the call and return a result
The adapters are supposed to be written once per framework, so one for aiohttp, one for httpx and so on.
Senders you are supposed to customize and replace to your heart's content. The currently available adapters take optional senders as arguments.
An aiohttp sender is defined as: Callable[[ClientSession, Call, Type[T]], Awaitable[T]]. In other words, it's something that gets an instance of aiohttp.ClientSession, an instance of Call and is supposed to make the request and return an instance of T (we get the T from the method signature). The default sender included with the aiohttp adapter is simple enough to be shown here, inline:
from msgpack import dumps, loads
from cattr import Converter
converter = Converter()
async def s(session: ClientSession, call: Call, type):
    async with session.post(
        url,
        data=dumps(converter.unstructure(call)),
        timeout=client_timeout,
    ) as resp:
        return converter.structure(loads(await resp.read()), type)
It basically uses msgpack and cattrs to prepare a payload and posts it to a URL. Then it reads the response, pulls it through msgpack and cattrs again and returns it. It's essentially 3 lines of code, and as such can be replaced very easily if you want to use ujson or any other serialization tech. (Note we haven't actually gotten to error handling yet, so that part's unspecified.) One of our next steps should be to introduce a similar receiver concept for servers.
Apple InApp Validation
Now that I've actually defined some fundamental concepts and written some code, let's see if all this effort actually survives contact with the outside world. Let's write some code to interface with Apple's and Google's systems for validating in-app purchases. Apple first.
Apple provides us with a simple HTTP endpoint, verifyReceipt, that we're supposed to POST a JSON payload to and it'll send us a JSON payload back. Let's model this as an interface with one method:
class AppStoreVerifier:
    @rpc
    async def verify_receipt(
        self,
        receipt_b64_data: str,
        password: Optional[str],
        exclude_old_transactions: bool,
    ) -> ResponseBody:
        ...
The ResponseBody class is omitted (over 100 lines of code, and that's with parts skipped), but you can take a look here. I've modeled it as a type-annotated attrs class so we can use cattrs to structure it up from JSON. I've also replaced a few of its fields with pendulum.DateTime instances, and set up structuring rules in cattrs to be able to convert a few of the JSON fields to DateTimes. Now all we need is a sender, and we can make calls.
Since the API only contains one method, the sender is fairly simple. You can take a look right here.
Interlude: A CLI Interface
Now that we've written our first real-world client, it'd be great if we could actually test it somehow. One way of trying it out is using the new asyncio shell available in Python 3.8 by using python -m asyncio, creating the client manually, and invoking one of its methods.
Another would be creating a small CLI utility for doing requests. I've done so in the pyrseia.__main__ module, using the Typer library. Here's a taste:
$ python -m pyrseia contrib.apple:create_verifier "verify_receipt('<receipt>', None, True)"
ResponseBody(status=<Status.SUCCESS: 0>, latest_receipt=None, latest_receipt_info=[], pending_renewal_info=[], receipt=ResponseBody.Receipt(<omitted>), environment=<Environment.PRODUCTION: 'Production'>, is_retryable=None)
The first argument is a coroutine that produces a pyrseia.Client. The second argument is a string that gets parsed, evaluated and invoked using said client. The result gets printed out, the client is closed, and that's that.
There's also the --interactive (or -i) flag, which will drop you down into a PDB session with the response available to you so you can tinker with it manually instead of just printing it out.
Google InApp Validation
Google's InApp interfaces are a little more complex than Apple's. They insist on you using their libraries (here) to access their APIs, which is probably a good idea cryptographically speaking. But their library doesn't support asyncio and would leave us with nothing to do in this section, so we're not doing that. ;)
First we need to deal with auth. Google will provide you with a service_account JSON file, containing a bunch of private cryptographic information. We use the data in that file to get a temporary token from their auth servers, valid for one hour. This call makes use of JWT, courtesy of the PyJWT library. We can then use that token to actually call their APIs.
Here's the API we're implementing (only two methods this time):
class GooglePlayDeveloperApi:
    @rpc
    async def get_purchases_products(
        self, package_name: str, product_id: str, token: str
    ) -> ProductPurchase:
        ...

    @rpc
    async def get_voided_purchases(
        self,
        package_name: str,
        start_time: Optional[int],
        end_time: Optional[int],
        token: Optional[str],
        type: int,
    ) -> VoidedPurchasesResponse:
        ...
(Class definitions omitted for brevity.)
The client network adapter implementation is available over here.
The interesting thing about this client adapter is sharing state between invocations of the sender (the client adapter is basically a sender factory, remember?). The state we want to share is the access token, which is only valid for an hour. We define the sender in the actual client adapter function body, so it captures the client adapter's local variables as a closure. A useful property of Python closures is that they capture variables by reference rather than by value, so a rebinding in the enclosing scope is visible inside the closure. This gives us an easy way of sharing the access token.
We also want to be good citizens and avoid a thundering herd problem, where we might refresh the access token multiple times from multiple requests when it expires. To avoid this we actually share two pieces of data: the access token itself and an optional in-progress asyncio task fetching it when it expires. asyncio makes this relatively easy to do without race conditions, due to it being single-threaded and every suspension point being obvious (due to it needing to be awaited).
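The single-refresh idea can be sketched with plain asyncio (hypothetical names; fetch_token stands in for the real JWT exchange, and module-level globals stand in for the closure variables):

```python
import asyncio

# Shared state: the cached token, the in-flight refresh task, and a
# counter so we can observe that the fetch happens exactly once.
token = None
refresh_task = None
fetch_count = 0


async def fetch_token():
    # Stand-in for the real JWT-based exchange with the auth servers.
    global fetch_count
    fetch_count += 1
    await asyncio.sleep(0.01)
    return "access-token"


async def get_token():
    global token, refresh_task
    if token is not None:
        return token
    # Only the first caller creates the refresh task; concurrent callers
    # await the same task, avoiding a thundering herd of refreshes.
    if refresh_task is None:
        refresh_task = asyncio.ensure_future(fetch_token())
    token = await refresh_task
    return token


async def main():
    results = await asyncio.gather(*(get_token() for _ in range(5)))
    print(results)  # five copies of the same token, from a single fetch


asyncio.run(main())
```

A full implementation would also clear the cached token when it expires and reset the task slot, but the single-task trick is the core of it.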
Matt Layman
Episode 5 - How To Use Forms
On this episode, we will learn about HTML forms and Django’s form system to use when collecting input from users. Listen at djangoriffs.com. Last Episode On the previous episode, we looked at templates, the primary tool that Django provides to build user interfaces in your Django app. Web Forms 101 HTML can describe the type of data that you may want your users to send to your site. Collecting this data is done with a handful of tags.
May 10, 2020
Abhijeet Pal
How To Upload Images With Django
One of the most common requirements in any modern web application is the ability to take images or pictures from users as input and save them on the server. However, letting users upload files can have big security implications. In this article, we will learn how to upload images in a Django application.
Uploading Images in Django
Django has two model fields that allow user uploads: FileField and ImageField. Basically, ImageField is a specialized version of FileField that uses Pillow to confirm that a file is an image. Let’s start by creating models.
models.py
from django.db import models

class Image(models.Model):
    title = models.CharField(max_length=200)
    image = models.ImageField(upload_to='images')

    def __str__(self):
        return self.title
The image column is an ImageField that works with Django’s file storage API, which provides a way to store and retrieve files, as well as read and write them. The upload_to parameter specifies the location where images will be stored, which for this model is MEDIA_ROOT/images/. Setting dynamic paths for the pictures is also possible:
image = models.ImageField(upload_to='users/%Y/%m/%d/', blank=True)
This will store the images in date archives …
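Beyond strftime-style patterns, Django's upload_to also accepts a callable taking the model instance and the original filename; the particular path layout below is just an illustration:

```python
from datetime import date


def image_upload_path(instance, filename):
    """Build a per-title, per-date upload path for an Image instance."""
    today = date.today()
    return f"images/{instance.title}/{today:%Y/%m/%d}/{filename}"


# In the model this would be wired up as:
#   image = models.ImageField(upload_to=image_upload_path)

# The path-building logic itself needs no Django to demonstrate:
class FakeImage:
    title = "vacation"


print(image_upload_path(FakeImage(), "beach.jpg"))
# e.g. images/vacation/2020/05/13/beach.jpg (date part varies)
```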
The post How To Upload Images With Django appeared first on Django Central.
Python Software Foundation
The 2020 Python Language Summit
The Python Language Summit is a small gathering of Python language implementers (both the core developers of CPython and alternative Pythons), as well as third-party library authors and other Python community members. The summit features short presentations followed by group discussions. In 2020, the Summit was held over two days by videoconference; questions were asked by a combination of voice and chat.
Summaries of all presentations will be posted here as they are completed.
Thanks to MongoDB for sponsoring the Python Language Summit.
Day 1
Should All Strings Become f-strings?
Eric V. Smith
Replacing CPython’s Parser with a PEG-based parser
Pablo Galindo, Lysandros Nikolaou, Guido van Rossum
A Formal Specification for the (C)Python Virtual Machine
Mark Shannon
HPy: a Future-Proof Way of Extending Python?
Antonio Cuni
CPython Documentation: The Next 5 Years
Carol Willing, Ned Batchelder
Day 2
Lightning talks (pre-selected)
The Path Forward for Typing
Guido van Rossum
Property-Based Testing for Python Builtins and the Standard Library
Zac Hatfield-Dodds
Core Workflow Updates
Mariatta Wijaya
CPython on Mobile Platforms
Russell Keith-Magee
Lightning talks (sign-up during the summit)
Image: Natal Rock Python
CPython on Mobile platforms - Python Language Summit 2020

"We've got very big news on Android," Russell Keith-Magee told the Language Summit. "We're close to having a full set of BeeWare tools that can run on Android."
The BeeWare project aims to let programmers write apps in Python for Android, iOS, and other platforms using native UI widgets. Keith-Magee reported that BeeWare has made good progress since his Summit presentation last year. On iOS, "Python worked well before, it works well now," and BeeWare has added Python 3.8 support. Until recently, however, Python was struggling to make inroads on Android. BeeWare's Android strategy was to compile Python to Java bytecode, but Android devices are now fast enough, and the Android kernel permissive enough, to run CPython itself. With funding from the PSF, BeeWare hired Asheesh Laroia to port CPython to Android.
Read more 2020 Python Language Summit coverage.
A top concern for BeeWare is distribution size. Python applications for mobile each bundle their own copy of the Python runtime, so Python must be shrunk as small as possible. There have been proposals recently for a "minimum viable Python" or "kernel Python", which would ship without the standard library and let developers install the stdlib modules they need from PyPI. (Amber Brown's 2019 Summit talk inspired some of these proposals.) Keith-Magee said a kernel Python would solve many problems for mobile. He also asked for a cross-compiling
pip that installs packages for a target platform, instead of the platform it's running on. Senthil Kumaran observed, "BeeWare, MicroPython, Embedded Python, Kivy all seem to have a need for a kernel-only Python," and suggested they combine forces to create one.
To regular Python programmers, the mobile environment is an alien planet. There are no subprocesses; sockets, pipes and signals all behave differently than on regular Unix; and many syscalls are prohibited. TLS certificate handling on Android is particularly quirky. For the CPython test suite to pass on mobile it must skip the numerous tests that use
fork or spawn, or use signals, or any other APIs that are different or absent. Adapting CPython for life on this alien planet requires changes throughout the code base. In 2015 Keith-Magee submitted a "monster patch" enabling iOS support for CPython, but the patch has languished in the years since. Now, he maintains a fork with the iOS patches applied to branches for Python 3.5 through 3.8. For Android, he maintains a handful of patch files and a list of unittests to skip. Now that Android support is maturing, he said, "We're in a place where we can have a serious conversation about how we get these changes merged into CPython itself."
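The talk did not describe the exact skip mechanism; as a hedged illustration (the test case and decorator names here are made up), guarding a test on the availability of an API is how such a skip list might look with plain unittest:

```python
import os
import unittest

# Sketch: skip, rather than fail, tests that need Unix process APIs,
# so the suite can still pass on platforms (such as iOS or Android)
# where fork()/spawn are absent or prohibited.
requires_fork = unittest.skipUnless(
    hasattr(os, "fork"), "requires os.fork(), unavailable on this platform"
)

class ProcessTests(unittest.TestCase):
    @requires_fork
    def test_fork_and_wait(self):
        pid = os.fork()
        if pid == 0:        # child process: exit immediately with status 0
            os._exit(0)
        _, status = os.waitpid(pid, 0)
        self.assertEqual(os.WEXITSTATUS(status), 0)
```

On a desktop Unix the test runs normally; on a platform without os.fork it is reported as skipped instead of erroring out.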
A prerequisite for merging these changes is mobile platform testing in CPython's continuous integration system. Currently, Keith-Magee tests on his laptop with several phones connected to it. As he told the Summit, he's certain there is a CI service with physical phones, but he has not found it yet and hasn't invested in building one. He develops BeeWare in his spare time, and CI is not the top priority. "Funding is one thing that makes stuff happen," he said. He thanked the PSF for the grant that made Android support possible. Mobile Python suffers a chicken-and-egg problem: there is no corporate funding for Python on mobile because Python doesn't support mobile, so there is no one relying on mobile Python who is motivated to fund it.
Keith-Magee asked the Summit attendees to be frank with him about bringing mobile Python into the core. He asked, "Do we want this at all?" If so, the core team would have to review all patches with their mobile impact in mind, as well as reviewing mobile-specific patches. "What is the appetite for patches that require non-mobile developers to care about mobile issues?" The decision would involve the whole core team and many community discussions. Guido van Rossum endorsed good mobile support long-term. So did Ned Deily, adding, "To actually do it will require money and people. Bigger than many other projects."
EuroPython
EuroPython 2020: First part of the program available
Our program work group (WG) has been working hard over the last week to select the first batch of sessions for EuroPython 2020, based on your talk voting and our diversity criteria.
We’re now happy to announce the first 60 talks, brought to you by 61 speakers.

We will have over 80 sessions for EP2020
Tomorrow, we will open the second CFP to fill the additional slots we have added for the Americas and India/Asia/Pacific time zones. This will then complete the program for EP2020, with over 80 sessions by more than 80 speakers waiting for you, from all over the world!

Waiting List
Some talks are still in the waiting list. We will inform all speakers who have submitted talks about the selection status by email.
Full Schedule
The full schedule will be available shortly after we have completed the second CFP, later in May.
Conference Tickets
Conference tickets are available on our registration page. We have simplified and greatly reduced the prices for the EP2020 online edition.
As always, all proceeds from the conference will go into our grants budget, which we use to fund financial aid for the next EuroPython edition, special workshops, and other European conferences and projects.
We hope to see lots of you at the conference in July. Rest assured that we’ll make this a great event again — even within the limitations of running the conference online.
Enjoy,
–
EuroPython 2020 Team
https://ep2020.europython.eu/
https://www.europython-society.org/
