General state of performance discussion – Test and report performance comparison during build? #5248

Open
@stalgiag

Description

How would this new feature help increase access to p5.js?

I explain this below.

Most appropriate sub-area of p5.js?

  • Accessibility (Web Accessibility)
  • Build tools and processes
  • Color
  • Core/Environment/Rendering
  • Data
  • DOM
  • Events
  • Friendly error system
  • Image
  • IO (Input/Output)
  • Localization
  • Math
  • Unit Testing
  • Typography
  • Utilities
  • WebGL
  • Performance

New feature details:

This is more of a meta-issue/discussion.

Performance feels like one of the biggest barriers to accessibility currently. I teach in a public school setting. Many of our computer labs have machines that have not been updated in well over a decade, and many of our students do not have their own computers at home, so during the pandemic many were borrowing Chromebooks for schoolwork. Students would often get 5 fps on some of p5's official examples while running video conferencing at the same time. Many students reported extreme frustration and disappointment when running even relatively simple sketches.

For a while now, we have been emphasizing the accessibility of the source code for contributors, but perhaps we need to put some systems in place that help us balance this against performance. There are limits to how dramatically we can improve performance within the Canvas API without doing something drastic like rewriting the whole 2D renderer in WebGL à la PixiJS, but I do think we could make significant improvements.

My feeling is that we could actually improve performance significantly just by having a way of comparing it during the PR process, similar to Codecov. Does anyone know of an effective way to accomplish this? If we could see something like "push() takes 5 ms longer to complete in this branch than in main, a 10% increase," we would be able to make more informed decisions during the PR process. What do others think? Does anyone know of a tool that might help us with this? Is it possible to build some kind of reporting tool into our unit tests based on the 'time to execute' output that we already see?
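Just to make the idea concrete, here is a rough sketch of the kind of timing harness I have in mind. The benchmark helper below is hypothetical (not an existing p5.js or test-suite utility), and the iteration counts are arbitrary; the point is that the reported means from a PR branch could be diffed against the same run on main in CI.

```js
// Hypothetical benchmark helper (not an existing p5.js utility): times a
// callback over many iterations and reports the mean duration, so numbers
// from a PR branch can be compared against the same run on main.
function benchmark(label, fn, iterations = 1000) {
  // Warm up a little so JIT compilation doesn't skew the first samples.
  for (let i = 0; i < 10; i++) fn();

  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const totalMs = performance.now() - start;

  const meanMs = totalMs / iterations;
  console.log(`${label}: ${meanMs.toFixed(4)} ms per call`);
  return meanMs;
}

// Example: timing push()/pop() in an instance-mode sketch.
new p5(p => {
  p.setup = () => {
    p.createCanvas(100, 100);
    benchmark('push/pop', () => {
      p.push();
      p.pop();
    });
    p.noLoop();
  };
});
```

A CI job could run something like this on both branches and fail (or just comment on the PR) when a mean regresses past some threshold, in the same way coverage reports work today.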

P.S. This conversation was started based on my own experience, but it is also in response to a large number of related issues; a quick glance reveals #5237, #4820, and #3610, and there are likely others.
