
Lightstream Labs Features Move into Studio

Starting today, we’ll be retiring our experimental feature section, Lightstream Labs. Labs was designed as an environment to test new features before a wider release. With some time out in the wild, we’re excited to be rolling the majority of these features directly into Lightstream Studio! Here’s what this change means going forward:

What’s Staying?

The majority of Lightstream Labs will be moving into Studio as native features that don’t need to be toggled on. These features will be RTMP Sources, RTMP Destinations, and 3rd Party Integrations. To access them, simply use the green Add Layer button as you normally would (or the stream destinations drawer for RTMP Destinations).

What’s Leaving?

With this transition, we will be removing the ability to add Video Clips. This feature wasn’t quite up to snuff performance-wise, so we’ll be taking it out of rotation for some tune-ups.

What will happen to my Video Clips?

We won’t be removing existing video clips from your project, but once a clip is deleted it can’t be added back to your scenes. If you’d like to continue using any existing video clips, you’re free to do so, but we will be unable to restore any deleted video file layers once they’re gone.

But wait, there’s more

We know that some streamers rely on video clips, so we do have a small workaround for those who still need this functionality. By using our 3rd Party Integration option and adding a Player.me integration to a scene, you can still add any video clips you’d like to the Player.me overlay itself and have the videos play on your stream! You can find full details on what Player.me supports here.


Lightstream’s Live Streaming Analytics Platform, Arsenal.gg, to Offer New Game Key Delivery Campaigns

Key Delivery streamlines game key management, benefiting Broadcasters, Publishers, and Indie Developers

CHICAGO, June 5, 2019 — Lightstream, an innovator in live streaming technology, today announced its recently acquired platform Arsenal.gg, the industry-leading discovery and analytics tool for live video game streaming content, will add automated game key delivery for game publishers and developers.

“Thousands of verified broadcasters use Arsenal.gg to understand how their content is being received by their audience,” said Stu Grubbs, Lightstream CEO. “Key delivery allows a publisher or studio to focus on their game while running a large influencer campaign so broadcasters of all sizes discover their game authentically. Broadcasters are able to request instant access to games that they are genuinely interested in, studios and publishers get all the data they need, and we handle the rest.”

Benefits to Broadcasters

Broadcasters can discover new games they’re interested in creating content around. A streamlined application process creates more opportunities for broadcasters of all sizes to reach out to game studios. Broadcasters can manage their previous requests, check status, and ensure they’re not duplicating requests.

Benefits to Publishers

Arsenal streamlines the entire process of managing game keys – automating the process and saving community and influencer managers time and effort. Because every applicant must authorize their streaming account, publishers can be confident that the broadcaster applying is who they say they are.

Arsenal saves developers and publishers time by providing:

  • In-depth broadcaster metrics: Every applicant’s statistics are instantly available within Arsenal – saving the time of exploring each broadcaster’s channel individually and guessing at their audience size and engagement.
  • Game key management: Uploaded keys will be tracked automatically by Arsenal, eliminating the need to manage spreadsheets to avoid sending duplicate keys to broadcasters.
  • Automated approval: Set minimum eligibility requirements to instantly approve and send a game key to qualified broadcasters. Broadcasters that don’t meet the automatic approval criteria are placed in a queue, so developers can quickly review on a case-by-case basis.
  • Dynamic reporting: A reporting dashboard is automatically created that tracks every broadcast using keys delivered with Arsenal.gg, showing how much content and how many views are being generated for every key delivery campaign.

Benefits to Indie Developers

Qualified indie developers are able to create a Key Delivery campaign for free. This helps accelerate the growth of smaller studios by improving discoverability by influencers and their audience. It also augments their team, saving time from manually managing, verifying, and emailing game keys.

Current Key Delivery Campaigns

World-class game publishers, indie game developers, and brand leaders are among Arsenal’s impressive roster of industry clients and include Raw Fury, 505 Games, and Team17.

Broadcasters can log in now and view active Key Delivery campaigns:

These clients have access to data across thousands of games and millions of broadcasters. They can search over 8 million broadcasters by games played, language, affiliation, or popularity across the six most popular streaming platforms – Mixer, Twitch, Facebook, YouTube, Mobcrush and Smashcast.

Arsenal’s streamer analysis can incorporate broadcast details with breakdowns by game, usual time and day of broadcasts, growth and viewership metrics, active days per week, and averages of stream hours by day, minutes watched and viewers per broadcast.

For brands and developers interested in a demo or a meeting with Lightstream about the Arsenal platform during E3, please reach out to Reed Scarfino at reed@arsenal.gg and Jeff Royle at jeff@arsenal.gg.

Visit https://app.arsenal.gg/campaigns to browse Key Delivery campaigns. For more information and to sign up for Arsenal’s free analytics platform, please visit https://arsenal.gg. Be sure to follow Lightstream on Twitter and at https://www.golightstream.com.

What’s new in OBS Studio 20.0

OBS Studio version 20.0 has landed! A major release is always accompanied by new features, updates, bug fixes, and more. The full patch notes can be found here. In this post, I will be going over the major feature additions, source updates, general additions, and a few bug fixes. This will hopefully be the first of many more informative posts to come!

New Features

Modular UI

When you first launch OBS, it might not look that much different:

OBS First Launch

But if we take a peek at the View menu…

View Menu

We see some interesting new options! To take advantage of this awesome new UI, first you need to unlock it by un-checking “Lock UI” from the View menu. Now, you can see that there are a few more icons that weren’t there before on the different sections of the main OBS window.

Scenes Unlocked

There is now an undock and a close icon in the title bar of every UI element that can be adjusted. If you click and drag this title bar, you can slide the object around to anywhere you want in the OBS window. You can even combine two objects into a single space, which then allows you to tab between them.

Moving Scenes

If the undock button is clicked, that section of the UI will pop out into its own window, which can be moved and resized however you like. Try it out!

All Objects Undocked

To move any undocked window back inside the main OBS window, simply drag it back where you want it. They can be moved to any location, on any side of the preview, including above and below it.

Redocking

You can even hide any objects that you no longer wish to see by clicking the X icon. But don’t worry, if you accidentally close them, you can reopen them from the View menu or by right-clicking on the title bar of any other object in the UI. You can also toggle them off this way.

Closing and opening

We hope that you enjoy this new level of customization that is now possible in OBS. Don’t forget that once you have things set the way you like, you can lock everything into place from the View menu to avoid accidentally moving something around. If an object is already undocked, however, locking will only prevent it from being closed. You can move it around and it can still be docked back to the main window.


New theme

The next new feature is something that I am personally quite fond of. A new theme! Gone are the days of boring black and white or white and black, followed by some blue and maybe a hint of green. Check out the new theme, called Rachni.

Rachni Theme

In addition to making OBS much more pleasant to look at, the theme itself is very well documented and should be a great base for other users to start creating their own themes. It’s really quite simple, and nearly all the objects you would want to change are listed in the theme itself with comments. Check it out, and be sure to share on the forums anything you come up with. Currently, there are a few minor known issues with the addition of the Modular UI, but they will be addressed shortly as time allows.

The new theme can be changed in Settings -> General in OBS, from the Theme drop down list.

Theme Select

Defaults button in filters/sources

All sources and filters now have a Defaults button, which resets all settings back to their default values. This might seem like a minor change, but it’s been highly requested and can be very useful when testing out new settings on any source or filter. Now you don’t have to delete and re-add a source when a simple reset to defaults accomplishes the same thing.


Source locking

Ever been adjusting your scene layout only to accidentally misclick and move the wrong image, totally screwing up the 15 minutes you spent getting it into the absolutely pixel-perfect position? Well, no more! Now all sources in OBS can be locked in place, preventing them from being moved in the preview window. You will see a new Lock icon next to each source in the list, and just like the visibility toggle, you click on it to lock or unlock. Locking will not prevent you from deleting a source, so still be wary of the delete key.


Preview Zooming

Oftentimes, we can’t have the OBS window set to show the true size of what we are capturing. Even on a 1080p display, showing a 1080p source in the OBS window means it will be slightly scaled down. To get around this, there are scaling options for the preview itself, which can be accessed by right-clicking on the preview window.

Preview Scaling Options

If a scaling option other than the default of Scale to Window is selected from the Preview Scaling menu, the preview will show in the actual size of either the Canvas resolution (the amount of space in the preview itself to place sources) or the Output resolution (what your viewers/recording will see). As you can see here, capturing a 1080p window and then viewing it in the default preview means I can’t see any of the text, and it would be hard for me to tell if there were any issues with the actual readability of the output.

Unscaled Preview

However, I can change the preview scaling to show the actual output size, and suddenly:

Scaled Preview

Now you can see exactly what the output will be. It used to be annoying to change between these different scaling sizes; now you can simply hold the space bar (a hand will appear to indicate you can) and zoom in and out with your mouse scroll wheel, as well as pan around the preview to view any area you like by clicking and dragging. Note that this only changes how the preview looks to you, not the stream/recording. If the hand icon doesn’t appear when you hold the space bar, make sure you have selected either the canvas or output scaling mode, and that it’s not still set to Scale to Window.

Zoooooooom!

Audio clipping visual notification

Oftentimes it’s hard to keep track of audio levels when streaming or recording. In OBS 18.0.0, the Audio Monitoring feature was added to let you keep track of sources and their levels while in use. Sometimes, however, those sources could peak and you wouldn’t notice with audio monitoring alone (due to gain filters or other audio adjustments along the way). Now, any peaking audio will turn the mixer’s volume level bar red to indicate that it’s peaking.

Peaking Audio

Stinger transitions

Another new feature this version is the ability to use Stinger Transitions. For those not up to speed with the industry terminology, a stinger transition is most easily explained by the following:

Kabooooom!

So, what’s going on here? It’s pretty simple to set up. First, you’ll want to get a video file that has transparency (technically not required, but strongly encouraged). Then, we add the new transition and name it what we want. We can now select the source video file and the exact moment during the video that we want the transition (a cut) to actually occur. This is usually timed to be the moment the entire screen is filled with the stinger. In this example, I knew that 2400ms (2.4 seconds) into the explosion animation, the whole screen is filled with smoke and it masks the actual cut. This makes for a nice, smooth, animated transition. You can also change the transition timing to happen on a specific frame of the video, instead of being time based. This can really help you fine tune your stingers.


FTL support

Lastly, but certainly not least, Microsoft and the team at Mixer have been working hard to bring their FTL streaming protocol technology natively into OBS. First introduced as part of Mixer (formerly known as Beam), FTL is a streaming protocol that allows for sub-second latency to your viewers. That means your streaming experience will feel more like you are sitting next to them as they watch you play, rather than having to deal with pesky service delays. And now you can use FTL from the main OBS client, without needing a separate install of the FTL-enabled version. Very cool stuff!

Currently, the FTL protocol is only supported by the Mixer platform. To enable it, just select “Mixer.com – FTL” from the services list, and then set up your stream key as you would normally. OBS will take care of the rest.

You can check out the Mixer platform itself at mixer.com.


Source Updates

Several sources received updates this version, some major, some minor.

First up, the VLC source has a new option that lets you select how much network caching is used for any network-based sources (e.g. streams, IP camera feeds). It can be found at the bottom of the source properties.

VLC Network Caching Option

The Decklink/Blackmagic source gets another nice feature to follow up from the audio channel updates in version 19.0. This time, we finally have auto-detection of video formats! No more fiddling with the giant list of supported formats hoping to stumble on the correct options. I can personally attest that this new source feature works like a charm.

Finally, and the most significant of the source updates this patch, is the Image Slideshow source. There are a ton of new features here.

  • Ability to hide the source or disable looping after all images are played
  • Options to select visibility behavior.
    • Stop when not visible, restart when visible
    • Pause when not visible, unpause when visible
    • Always play even when not visible
  • Hotkey controlled mode

The last option has been long requested, and now it is finally possible to manually control the image slideshow with hotkeys. You can toggle play/pause, restart, stop, show next slide, and show previous slide.

General Updates

There were quite a few general usability and quality of life updates in this version. All updates can be found in the full patch notes, but here are a few that stand out.

In OBS 19.0, a change was implemented that warned users when they launched OBS twice, as this was usually a mistake and not intended. At the request of several users who use multiple instances in their workflow, we have added a launch flag that suppresses this warning. Just append “--multi” to the shortcut or command when launching OBS.

Fullscreen Projector options have been added to the tray icon for the preview, so they can be quickly accessed when OBS is hidden to the tray.

Twitch server selection has been updated to include an Auto option. This option will automatically test and select the closest Twitch ingest server for you to use at that time, leveraging an API provided by Twitch themselves. Remember, closest may not mean best at that particular time.

The AMD AMF plugin has also been updated this patch. This update brings full compatibility with the latest 17.2.2 driver from AMD, as well as lots of bug fixes, performance enhancements, and updates for HEVC recording and the default presets. Full patch notes for AMF can always be found here: https://obsproject.com/forum/resources/amd-advanced-media-framework-encoder-plugin-for-obs-studio.427/updates

Bug Fixes

As with any update, there are a lot of small bug fixes, and they can be found in the full patch notes. Some of them are a bit more interesting than others, and I’ll explain a few of the more noticeable fixes that have been finished.

Settings window size fix

Up until now, the OBS settings window has had a fixed minimum height and width, which was slightly taller than a 720p display. While most displays are 1080p or higher, there are still quite a few people using 720p displays as secondary displays to keep an eye on OBS, among other reasons. In OBS 20.0, the minimum size of the settings window has been reduced to 700×512 to accommodate smaller displays. This was determined to be the smallest size the settings window could be without interfering with usability.

Unsupported GPU crash

In a few rare cases, trying to launch OBS on older hardware that did not meet the minimum requirements would simply crash at startup instead of showing the proper “Failed to initialize video” message, which better explains the issue to the user. This was frustrating for both end users and community support helpers, because a crash generally suggests something that can be fixed, when in this case it was simply unsupported hardware. The proper message is now displayed.


Final Thoughts

We here in the OBS Community thank everyone for your continued support of the project. We can’t wait to see the cool things you all create with the new features being added this version. If you have any issues, questions, or need help with anything, our forums and chat are open 24/7. Though, we do need sleep sometimes, so someone might not be around right away.

Happy creating!

Streaming with x264

Preamble

So, you want to learn more about video encoding? How to set up your stream for the best quality given your computer’s hardware and connection limitations? Let’s start with this video by Tom Scott.

He does a great job of giving a quick primer on how video encoding works, and you will hopefully have a better understanding of the topics and terminology that we’ll be going over. All done? Great! Let’s get started.

Before we get into the details, let me explain what this guide is not. This is not intended to be a fully detailed technical explanation of how x264 works; there are far better guides out there than what I can provide here. If you’re interested in the nitty-gritty, head over to the doom9 forums, the FFmpeg docs, or the x264 website and start digging. This is also not intended to be a “best settings” guide, and I will not recommend any specific settings. It is intended to help you understand how video encoding works in general, how to identify potential issues with your settings, and where to look to correct them.

Let me reiterate that there is no such thing as “best settings”. Every single setup, for every single use case, will be different. As an example, I have 3 different sets of streaming encoding settings for the types of media I stream. One for fast motion games, one for desktop applications, and another for live video. If you are new to OBS or streaming in general, OBS Studio contains a feature known as the “Auto-Configuration Wizard” which can be found in the Tools menu. This tool will test your system and your internet connection to determine what it can handle from both an encoding standpoint and a connection stability standpoint. However, the best way to find your best settings is to test, test, and test again.

This guide is focused entirely on streaming with the x264 encoder. This is what the vast majority of OBS users will be using when they stream. For local recordings, your choice of encoder is far less relevant than your actual settings and in many cases a hardware encoder will be better suited for you. You can learn more about local recording settings in this guide here: http://obsproject.com/forum/resources/obs-studio-high-quality-recording-and-multiple-audio-tracks.221/

It is important to understand that video encoding is a very resource-intensive process, especially when attempting to do it in real time. Hardware encoders – such as Nvidia NVENC, Intel QuickSync, or AMD VCE – can help with this, as they use dedicated hardware in your system for video encoding. As a trade-off, their overall quality per bitrate is lower than CPU-based x264 in nearly all cases. For streaming, where bitrate is usually a constraining factor, x264 is currently the best option for getting the most quality out of your stream.

Only in recent years have standard consumer-grade computers reached the point where they can realistically provide the processing power for live video encoding. Keep this in mind when you wonder why your 8-year-old dual-core Pentium CPU cannot encode 1080p 60fps without failing miserably. Even the most powerful consumer CPUs can still struggle with the load of encoding a high-resolution, high-fps stream.

There are two primary components to the x264 encoder we’ll be looking at: presets and bitrate.

Presets

x264 has several CPU presets, in increasing order from low CPU usage to high CPU usage: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo.

A “preset” is exactly what it sounds like: a set of pre-determined settings for x264, so that you don’t have to tweak them all manually yourself. These sets of settings have been tested by lots of people and are great for general use, depending on what you want to get out of your encoder. The actual details of each preset’s settings can be found here: http://dev.beandog.org/x264_preset_reference.html

The basic idea is that, all things being equal (same bitrate, etc), less CPU usage would result in worse quality, and more CPU usage would result in better quality, because the presets change how much time the encoder spends compressing each frame to look good within its setting constraints. Sometimes you need to reduce your CPU usage in order to get good performance, and the higher CPU usage presets can be difficult to use effectively with average consumer CPUs.

The last thing to note is that any preset slower than medium has significant diminishing returns, and is not really worth the extra CPU cycles for streaming scenarios. Unless you are squinting at two identical streams side by side, you will not notice a difference. That said, if your CPU can handle it, there’s no reason (outside your power bill) not to use them.

Here we have put together some comparison examples for how this actually looks in practice.

All these tests were performed with exactly the same source video and bitrate; only the preset was changed.
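If you want to reproduce a comparison like this yourself, a preset test can be scripted so that the preset is the only variable. The sketch below builds FFmpeg/libx264 command lines as Python argument lists; the filenames are hypothetical, and this assumes an ffmpeg build with libx264 available rather than describing the exact tooling used for the tests above:

```python
# Sketch: build ffmpeg/libx264 commands that differ only by preset.
# Hypothetical filenames; assumes an ffmpeg build with libx264 available.

def x264_cmd(preset, bitrate_kbps, src="source.mp4", out=None):
    """Return an ffmpeg argument list for a fixed-bitrate x264 encode."""
    out = out or f"{preset}.mp4"
    rate = f"{bitrate_kbps}k"
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-preset", preset,            # the CPU usage preset under test
        "-b:v", rate,                 # identical bitrate for a fair test
        "-maxrate", rate,
        "-bufsize", f"{2 * bitrate_kbps}k",
        "-c:a", "copy",               # leave audio out of the comparison
        out,
    ]

# Same source and bitrate; only the preset changes between the two encodes.
commands = [x264_cmd(p, 6000) for p in ("superfast", "medium")]
```

Run each resulting command (for example via subprocess.run) and compare the two output files side by side.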

However, it’s important to note that all of this assumes your PC can handle the preset and resolution/fps you are trying to encode. If it cannot, you might start to notice skipping or image distortion on your stream, accompanied by a message in OBS stating: Encoding overloaded! The simple way to fix this is to turn down the resolution and FPS of your stream to reduce the load; failing that, you may need to switch to a faster preset. You can check our detailed guide on how to troubleshoot encoding issues here: https://obsproject.com/wiki/General-Performance-and-Encoding-Issues

Bitrate

The amount of effort the CPU spends compressing each frame isn’t the only factor in video quality. Bitrate is also important, as it determines how much information can go into each frame of video. If you can cram more data into each frame, you don’t need as much CPU spent on compression, so you can make each frame look better just by cranking up the bitrate. Remember how the Tom Scott video looked when he simulated lowering the bitrate, with all other settings left the same? The same is true the other way: if you increase the bitrate, you can make the video look better.

Thus, you can get a good-looking video with relatively low CPU usage by using a low-CPU usage preset (like superfast) with a higher bitrate. Just note that the amount of bitrate you’ll need for this can vary greatly depending on the resolution and FPS you are trying to stream at. A 1080p 60fps stream at only 4000kbps bitrate using the ultrafast preset is not going to look very good. For reference, the YouTube encoding settings list is a great place to start. The list below differs slightly, and would be my personal recommendation as a starting point.

Resolution    Bitrate              FPS
853×480       800 – 1200 kbps      30
1024×576      1000 – 3000 kbps     30
1280×720      3000 – 5000 kbps     30
1920×1080     5000 – 8000 kbps     30
2560×1440     8000 – 12000 kbps    30
3840×2160     12000 – 20000 kbps   30

These numbers assume the x264 encoder with the veryfast preset, and low-to-medium motion in your scene. For 1080p 60fps in a high-motion scenario (like an action or FPS game), you would likely need more than 8,000kbps of bitrate at veryfast for it to look smooth during playback. Conversely, low-motion video (such as an RTS game or streaming Photoshop art creation) can work with a much lower bitrate. These charts are intended to give you an idea of where to start.
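A quick way to sanity-check a bitrate against this chart is the rough “bits per pixel” heuristic (my own addition, not part of the original chart): divide the bitrate by the number of pixels delivered per second. For x264, values much below roughly 0.05 bpp tend to fall apart during motion:

```python
def bits_per_pixel(bitrate_kbps, width, height, fps):
    """Rough quality heuristic: bits available per rendered pixel."""
    pixels_per_second = width * height * fps
    return bitrate_kbps * 1000 / pixels_per_second

# 1080p30 at 6000 kbps sits comfortably in the chart's suggested range:
print(round(bits_per_pixel(6000, 1920, 1080, 30), 3))  # 0.096

# Doubling the frame rate at the same bitrate halves the bits per pixel,
# which is why 1080p60 high-motion streams need more than 8000 kbps:
print(round(bits_per_pixel(6000, 1920, 1080, 60), 3))  # 0.048
```

Treat the 0.05 threshold as a starting point for your own testing, not a hard rule; motion and scene complexity matter just as much.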

The end result of using a faster encoder preset and upping the bitrate will probably look a bit different than a comparable bitrate at a slower preset, but the goal is to get roughly the same quality by trading CPU usage for bandwidth. I recommend trying both and seeing which works for you in terms of quality, CPU usage, and what your connection and streaming service can handle.

For comparison, here is the same scene encoded using x264 veryfast in both a low-motion high-detail scene, and a high-motion scene.

As before with the preset test, these tests were performed with the exact same source video and preset; only the bitrate was changed.


I hope this has helped you gain a better understanding of the basics of how video encoding works, the importance of bitrate, and the overall impact that changing these settings in OBS will have on your stream and the performance of your PC. I will say it again: the road to finding the perfect settings for your stream is to test, test, and test again. If you have any further questions, our forums and support chat are always open.

Happy creating!



OBS Community Ideas and Suggestions – Fider Launched!

Ever wanted to make suggestions to the developers for cool new features and updates that you want to see in OBS Studio? How about check if other people have had the same idea as you? Now’s your chance! In addition to our newly launched Discord server, we’re excited to announce a brand new community feedback portal – Fider.

Your feedback has always been heard, even if we haven’t been able to respond to each individual request. Fider will allow us to be much more organized and transparent with our community. We can show the things we’re working on, features that are commonly requested, and even ideas that we know you want, but are either not possible to include in OBS or have been low priority. Fider will allow a much more public way for us to respond, and let you know that your voice has been heard.

Finally, we hope this gives potential developers and contributors a great place to look for the kinds of features that OBS needs, and where they can help out. We’ve recently updated our Getting Started with OBS Development guide, which is also linked on the Fider page.

We have pre-populated some of the more commonly requested items to give everyone an idea on the format and types of ideas we want to hear from you. See something you like? Give it a vote, and leave a comment showing your support. Don’t see your idea? Add it for everyone to see! We’re excited to hear what you all have to say.

Fider link: https://ideas.obsproject.com/

A maintainer’s guide on how to contribute to an open source project on GitHub

This guide is written by the maintainer of the OBS Project, a relatively large open source project which receives about 30-50 pull requests per month. It is meant to be a very concise, to-the-point guide on how to contribute to this (or any) open source project, based upon my experience over the years: how to maximize your own contribution efficiency, as well as the efficiency of the maintainers and your fellow contributors.

The Bare Basics

To contribute to a project, you must first be skilled with both Git and the programming languages the project uses.

Know how to and how not to use Git

If you are not experienced using Git when you contribute, you will reduce the project’s contribution efficiency.

Examples of particularly vital Git skills:

  • Knowing how to use interactive rebase (git rebase -i [commit])
  • Knowing how to squash commits (See: “Knowing how to use interactive rebase” above)
  • If you plan on making larger or significant changes to the project, knowing how to split commits and use patch mode (git add -p, git reset -p, git checkout -p)
  • Using git diff frequently against your changes (especially your unstaged changes before you create a commit)
  • Knowing how to use GitHub/Bitbucket/etc

If you are just starting out with Git, please see: Git Guides and Other Links

Be able to work with others, and keep your ego in check

Most open source projects are driven by their community; the ability to work with others and communication skills are essential. Be willing to listen to others, be patient, and do your utmost to prevent discussions from getting needlessly heated. Be willing to make compromises when possible.

How to contribute to open source projects efficiently

Read the project’s guidelines

All projects have a specific style of programming that they adhere to. If the project is written in C/C++, its code style may be Allman, K&R/KNF, Google, or GNU; if this is all new to you, it would be wise to learn a little bit about these styles and their indentation conventions. If a project comes with a .clang-format file, make sure to utilize it and run clang-format on your changes before staging. If the project uses custom styling and no automatic formatting tools, look at the project history and emulate it as closely as possible before submission. This project, for example, uses the Linux kernel style of KNF (Kernel Normal Form).

Every project has their own way of doing things; sometimes they’re highly structured, organized, and rigid, and sometimes they’re lenient and scrappy. Keep in mind that there are also occasional imperfections, quirks, and inconsistencies in any project as well.

Read the project’s git and pull request history

One of the most often overlooked and quickest ways to understand how to make pull requests, changes, commits, or adhere to a project’s overall style, is to spend some time going through the project’s commit and pull request history. This will allow you to learn how they structure and style their code, their commits, and their commit messages. You can also see what sort of pull requests are typically accepted or rejected and why. Try to understand the project, the way they do things, and try to at least match that or better when contributing.

Making these efforts will save both you and the project a lot of time, and increase the project’s efficiency as a whole.

How to make good commits

When a maintainer reviews your code, they typically don’t want to see a jumble of unrelated code in a single commit diff. It’s bad for bisecting, makes it hard to review, and more likely to be rejected until corrected. There are a number of good guidelines that typically apply well to almost any open source project:

  • Do not mix multiple changes into a single commit: split each unrelated change, however small, into an individual commit. This makes each specific change easier to review, and lets git bisect catch bugs much more efficiently.
  • Make sure each commit can be fully compiled (and preferably is functional) by itself. This ensures that bisect can land on that specific change.
  • Do not include unnecessary changes to existing code, such as code styling or whitespace changes.
  • When submitting a pull request, do not have commits that “fix” a mistake in a prior commit within the pull request unless you intend to squash them into that commit later (for example, if the pull request is considered a work in progress). Squash the fix commit into the commit that it fixes.
  • Use git diff frequently on your unstaged changes. Ask yourself whether this is something that would be easy to read, understand, and review by itself. If not, you may need to split the code into multiple commits.
  • Make sure the commit messages are clear and concise: treat the commit message as a brief description and annotation to the code being submitted. The best bet is to follow the 50/72 rule:
    • First line 50 columns or less (with the exception of the module prefix in our project), present tense, with no ending punctuation.
    • Second line blank.
    • Remaining lines are a detailed yet concise description, word-wrapped at 72 columns.
    Here is a recent example:

        libobs: Add functions to get raw video output

        Adds obs_add_raw_video_callback() and obs_remove_raw_video_callback()
        functions which allow the ability to get raw video frames without
        necessarily needing to create an output.
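The points above can be sketched as a shell session. The file names and messages are hypothetical, and `--fixup` with `--autosquash` is one convenient way to fold a correction into the commit it belongs to:

```shell
# Two unrelated edits become two separate commits (file names are made up):
git add src/audio.c
git commit -m "audio: Fix buffer underrun on device restart"
git add src/ui-settings.c
git commit -m "UI: Clarify bitrate tooltip"

# A later correction to the audio commit gets squashed into it, rather than
# left as a stray "fix" commit in the pull request:
git add src/audio.c
git commit --fixup=HEAD~1
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~3
```

Setting `GIT_SEQUENCE_EDITOR=true` accepts the autosquash-arranged todo list as-is; interactively, you would review it in your editor instead.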

How to make good pull requests

Similarly, when a maintainer reviews a pull request, they would prefer not to have multiple commits that are unrelated. Ideally, a pull request should be one total overall modification to the project, not multiple.

  • A pull request should be a set of commits that are all related. If you have multiple changes that are completely unrelated, separate them into different pull requests. This makes things easier to review, more organized, and more likely to be accepted.
  • If you want to make a pull request that depends on another pull request, it’s recommended (though not necessarily a rule) to hold off on submitting it until the first one has been accepted and merged.
  • Pull request messages should contain a detailed description of what it changes and why it’s beneficial to the project.
  • If your pull request is a work in progress or has an issue, please make sure to note that.
  • Do not create pull requests from your fork’s master branch. Create a new branch to use for the pull request.
  • On GitHub, a pull request is simply a reference to a branch on your fork. This means you do not have to remake your pull request to update it. Just push to the branch and the pull request will be automatically updated. If you rebased and/or squashed, force push to the branch and the pull request will be automatically updated similarly.
  • Do not have merge commits within your pull request. See Git Guides and Other Links for more information on how to use Git first.
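A minimal sketch of that branch workflow (the branch name `fix-audio-sync` is invented for illustration):

```shell
# Never open the PR from master; use a topic branch instead.
git checkout -b fix-audio-sync
# ...make commits...
git push -u origin fix-audio-sync       # the PR points at this branch

# After rebasing or squashing in response to review, update the same PR
# by force-pushing the rewritten branch:
git push --force-with-lease origin fix-audio-sync
```

`--force-with-lease` is a safer variant of `--force`: it refuses to overwrite the remote branch if someone else has pushed to it since you last fetched.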

Get involved with the community

Each project has its own way of managing its community: IRC, mailing lists, forums, Discord, or all of the above. Get involved with the community and get to know the people around it. Contributors should know who the maintainers of the project are. Most projects will have one maintainer, although a few may have more than one, and rarely (such as in the case of the Linux kernel), there will be a hierarchy of maintainers for different subsystems of the project.

Each community is different; some communities are very formal, some are very informal, and then some are somewhere in-between. Some are very friendly, and some are unfortunately not so friendly. Because of this, I highly recommend spending some time getting involved with the community and seeing whether or not you feel comfortable with them.

When not to contribute

Every single new line of code has a maintenance cost.

Due to the fact that open source projects typically have limited resources, the code and features of an open source project usually need to be kept as minimal and focused as possible. Contribute changes that benefit as many people as possible and align with the project’s goals.

If you have a feature or change like this that benefits you and people like you specifically, but doesn’t necessarily benefit the broader base of users, you should consult the project’s maintainers and community on whether or not the change is something they want.

While it’s easy to say “my change doesn’t hurt anyone”, an open source project’s maintainers usually want to keep things as slim and focused as possible to minimize maintenance costs and technical debt.

A bunch of small and/or rarely-used features can add up over time; it makes the code progressively more complicated, less readable, makes the project more of a burden to maintain, and reduces the project’s overall efficiency little by little.

If your feature does not benefit many users, it may be rejected, however insignificant the change may seem. Your needs may simply not match the project’s overarching goals and/or focus. The maintainers typically do not mean any offense.

Git Guides and Other Links

OBS Studio Progress Report, August 2018

Welcome to the first OBS Studio Progress Report. My name is Jim, the normally-silent author of OBS. Version 22.0 has finally come out, and I had a really great time writing it.

This is going to be a long post, because I almost never speak publicly, so get ready.

First, I want to say thank you

This month marks the sixth anniversary of the very first release of OBS back in August 2012. Back during those times I was able to answer every single post on the forum, interacted with almost every single person who came around the chat, and answered every email. Some time around 2014-2015, forum posts, emails, and chat became so active that it would take me 10 hours per day to answer everything. Eventually, I had to stop, delegate that task to others, and focus exclusively on working on the program.

As of today, 21.1.2, the previous release before 22.0, has had nine million downloads in three months’ time.

This blows my mind.

The program has (along with its derivatives) arguably become the most widely used tool for live streaming and recording on Twitch, YouTube, and around the world. I can’t stress enough how grateful I am for this opportunity. Before I made the program, I was in a pretty bad place. Now, I’ve had opportunities that I never thought I would ever have, and a unique, solid resume that will keep me fed for a long time. Hopefully, I can continue to do this as long as I can, because I am having a lot of fun doing it.

So again, to every one of you who use my program and find it useful: Thank you. You have changed my life; and I hope I have been able to change yours for the better as well. I know that many of you have become quite successful with the help of our humble tool, and I hope you continue to be successful.

Now, let’s do a quick run over the biggest developments that happened since 21.1.

The browser source was majorly refactored, and is now hardware accelerated

The browser source is arguably one of the most complicated and vital plugins of the project. Packaging Chrome itself to be usable as a source that you can add to OBS is about as complicated as it sounds. Fortunately, thanks to the great developers and contributors over at the CEF project, we’ve been able to integrate the power of the browser into the project for use as a compositing tool. And with version 22.0, we’ve finally achieved hardware-accelerated off-screen browser surfaces.

Before, the browser source worked in one of two ways:

  • Chromium renders the browser surface with hardware rendering (GPU rendering) -> Chromium downloads the surface to RAM -> Surface is passed from Chromium to OBS with RAM -> We upload back to the GPU to be used as a source for OBS
  • Chromium renders the browser surface with software rendering (CPU rendering) -> Surface is passed from Chromium to OBS with RAM -> We upload to the GPU to be used as a source for OBS

Unfortunately, for the longest time, the former case took more resources than the latter case, and would also have strange hiccups in rendering, so we were forced to use software rendering for the browser source. However, software rendering also had its own issues, such as obscure crashes and the inability to use WebGL for advanced overlays. In either case, we were forced to always upload every single frame back on to the GPU for compositing. So we’ve always been stuck between a rock and a hard place for the browser.

Today, there are proposed changes on CEF which would allow passing of a shared surface from the Chromium renderer directly to the program implementing CEF. What this means is the pipeline now looks like this:

  • Chromium renders the browser surface with hardware rendering (GPU rendering) -> Chromium shares that surface with OBS to be used as a source without moving off of the GPU

The performance and resource benefits of this are astounding. You might initially think “but wait, isn’t it using more GPU now that it has to render on the GPU?”, but it turns out it’s actually the opposite case: before, when we had to upload the browser surface to the GPU every browser frame, that action of uploading frames alone required more GPU usage than actually just rendering the browser surface with the GPU.

To give some perspective on how much data had to be transferred to the GPU, let’s take the worst case scenario as an example: a 1920×1080 browser surface updating at 60 frames per second. A 1920×1080 RGBA frame is approximately eight megabytes, and transferring that to the GPU at 60 frames per second is almost 500 megabytes per second. What this means is not only do we reduce CPU and RAM usage, because we’re no longer rendering the surfaces on the CPU and into RAM, but we also reduce GPU usage, because we’re no longer transferring countless megabytes to VRAM!
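To make the arithmetic above concrete (RGBA is four bytes per pixel):

```shell
# Worst-case upload bandwidth for a 1920x1080 RGBA browser surface at 60 fps
bytes_per_frame=$((1920 * 1080 * 4))        # 4 bytes per pixel (RGBA)
bytes_per_second=$((bytes_per_frame * 60))  # 60 frames every second
echo "$bytes_per_frame bytes/frame"         # prints: 8294400 bytes/frame
echo "$bytes_per_second bytes/second"       # prints: 497664000 bytes/second
```

That is roughly 8 MB per frame and just under 500 MB/s, matching the figures in the paragraph above.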

This change to the browser source reduces its CPU usage, its RAM usage, and its GPU usage. It is one of the most significant optimizations the program has seen since game capture was created.

On top of all this, the browser source was always an incredibly complicated plugin; it had tens of thousands of lines of code by itself. With version 22.0, it’s now completely refactored, and over 13,000 lines of code were pruned (you can see the commit here). Any experienced programmer can probably sense the joy I have when I say those words. The act of pruning massive amounts of unnecessary code is arguably more valuable to the overall health of a program than adding new features. All the memory leaks were fixed, the design has been greatly simplified, and the code is much more readable. It was a monumental undertaking that was well overdue, and I’m proud to say that the code is now in the best place it’s been in years.

It was absolutely worth it to finally tackle one of the biggest issues that we’ve had to face with the program.

Source Grouping

Source grouping has been a long-requested feature — the ability to treat multiple sources as one. We originally added the ability to use scenes as sources, which was an interim workaround to this problem (and which you can still use). However, scenes are always the same size as the canvas, and to edit the sources, you have to switch to those scenes to edit their elements. The approach has its upsides and downsides, but it’s always been a little awkward to use scenes that way.

With 22.0, we now have source grouping. I contemplated implementing it by just automatically combining the selection rectangles of multiple sub-items and always treating them as “one”, which would have been the simplest solution. However, implementing groups as resizable scenes internally would let us apply filters to a group and more easily reference groups in other scenes. It was a bit more complicated, but I decided to go with the latter approach because it gives users the most benefit and features.

I also wanted to make sure that you could see sub-items of groups in the list box, select/modify those sub-items, and expand and collapse that list, so the widget for the list of sources had to be completely rewritten.

What’s next for the program

Making the program easier, and improving user experience

If recent events have taught us anything, it’s that OBS isn’t particularly friendly toward new users. The auto-configuration tool (which users can run the first time they start the program, or access in the Tools menu) helped a lot with this; it was a really good first step. It can get streamers started with their encoding settings and get the program outputting to stream, but it doesn’t help them set up the other parts of their stream: the captures, overlays, cameras, alerts, and all the things that are increasingly important to streamers these days.

One of the primary things being focused on for the next few versions will be making things easier for new users and getting people started more easily; being able to set up not just their encoding/video settings, but also to help them get started with overlays, captures, and other things as well.

User experience also goes a long way: a well-designed user interface with a smooth, intuitive flow. Sadly, the program isn’t ideal in this area either. There are features that some people don’t realize exist, some of which are hidden in menus. Or sometimes, users don’t know what a feature means or what it does, so they never use it. The number of times users ask us why they’re dropping frames is almost silly, and it’s something that should probably be explained better in the program. The status bar doesn’t convey information very well, the settings window could use a lot of work, the context menus have endless numbers of options — the list goes on and on.

These are all things that can be improved to make the experience of the program better. Reducing any sort of “clunky” feeling in the interface and making things feel better and easier to grasp is another one of the things being focused on.

Another nice thing that I’m hoping to add is the ability to log in to Twitch/YouTube from within the program, which leads me into my next segment:

Access to service APIs from within the program

One of the things that we should have done from the beginning is implement service APIs within the program: APIs for Twitch, YouTube, and perhaps even the APIs for streaming alert services. This would be useful not only because we would like to avoid making the user copy and paste their stream keys, but because it would allow the program to add a wide variety of other features, such as viewer count, channel followers, channel subscribers, alerts for whatever alert services they use, and all sorts of things that are becoming an increasingly necessary part of streaming. This would all be optional, of course — if a user doesn’t want to use this functionality, they should always be allowed to use the program the way they have been and just enter a stream key instead.

When these APIs are implemented, not only will this help make the program easier to use for first time streamers, but also provide great new features that users can take advantage of, and provide tools to companies to improve the user flow and better accommodate their users.

I had originally intended to have it be a part of the 22.0 release, but our settings window needed a ton of work to accommodate this, so I decided to just make the 22.0 release instead as-is.

On forks and plugins

There have been some interesting developments around OBS recently, one of which is a fork of our own project by a certain company; it has our core, but a different frontend. I think users having options is always a good thing, even if it’s putting a lot of pressure on me to perform, and even if it is very stressful for me at times. It keeps the industry healthy and benefits the users the most, because everyone is striving to improve and innovate.

However, despite its name, I want to be clear that this fork is not associated with myself or the OBS team. I want to state very clearly that I have not made any contractual agreements with any of the alert service companies. The only true requirement I have ever made is that people abide by the GPL.

I have never made nor will ever make any contractual agreement which would end up denying the users their freedom with the program. For me, the program is and always will be by the users, for the users. This is my personal ideal. Although there is a lot of zealotry around the GPL license at times, I fully believe in its intended ideals of freedom.

Fundamentally, if I am lacking something that the industry needs, then I want to strive to provide it to the industry. That is something I am focusing on; to make the program not only better for the users, but better for the industry, and help provide growth to the industry. Making the program easier to use for new users, improving user experience, and providing better tools for both users and companies in the industry.

However, I want to make sure that the users are always the ones who are in control.

Contribution, organizing, and moving forward

The program is quite a large project now. It has a few hundred thousand lines of code, and approximately 30-50 pull requests per month. Meanwhile, I’m still adding improvements, fixes, and features of my own while managing all of this. At this point, organization and delegation are becoming increasingly necessary to accommodate everyone’s needs. I realize now that this isn’t something I can do by myself, so there may come a time very soon when some sort of official organization needs to be made.

Fortunately, the wonderful contributors to the project have been adding requested features, improving the project, fixing bugs, making translations, tending the website, doing support on the forums and chat, and helping in so many ways that it just blows my mind.

I cannot emphasize enough how grateful I am to everyone who has been involved with the project; everyone who has contributed, everyone who has helped provide support, everyone who has helped manage the different aspects of the project, everyone who has donated, everyone who has reported a bug or feature request, and especially all of the users who use and enjoy our humble program.

This project has the best community anyone could have asked for.

I am by no means perfect, and not every decision I have made is perfect; but I will do what I can to make the program the best it can be.

If you would like to contribute to the project, please read my guide on how to contribute to open source projects.

Thank you for reading!

New Ways to Support OBS Development

It’s amazing to think that the first version of OBS was publicly released over six years ago. What started out as a small side project by Hugh “Jim” Bailey to make a free and open source program to stream StarCraft 2 has grown into a powerful force in the streaming and video production industry. Hundreds of thousands of people use OBS Studio every day not just for video gaming, but also for broadcasting everything from conferences to sports competitions to school announcements. It’s a tool that can be used freely by anyone, from large studios with big budget productions to individuals who just want to engage with a community online.

From the beginning, OBS has been a labor of love created by Jim and a group of volunteers dedicated to the ideal of free and open access to streaming and recording software. We’ve seen great growth in our developer and support volunteer community over the last several years, and it’s inspiring to see people spend their free time improving OBS and helping others use the program.

However, as OBS has grown, so too have the realities of running such a large open source project. We have many volunteers helping to develop, maintain, and manage the project, removing some of the work from Jim’s shoulders, but we want to make sure that those people have an incentive to continue helping with the project and avoid burnout. On top of that, we want to increase our ability to better handle the demands of the industry and community.

OBS will always be 100% free, and that is not something that will ever change. But it’s time that we take some steps to improve the project’s sustainability, and that means that we need to find ways to be able to pay our volunteers and compensate development expenses. To that end, we are announcing two new ways that you can help support OBS development financially: sponsorship via Open Collective, and backing via Patreon.

What is Open Collective?

Open Collective is a platform where a group of people can raise money for a shared purpose in an open and transparent way, even if the group may not have any formal organizational body. Open Collective uses a practice called fiscal sponsorship where a “host” organization provides facilities and services that allow for the group to accept payments not only from individuals, but also from companies in ways that companies understand. It’s a platform that has seen success already for several well-known open source projects, including Webpack, Babel, and Vue.js.

Thus, we are launching a sponsorship program through Open Collective that makes it easy for companies and individuals to sponsor the OBS Project to help ensure that we can continue working on the program. Not only does sponsorship get your logo on our contributor page (and the OBS homepage at the Gold and Diamond levels), but it just makes good business sense, too. If you have a business that depends on OBS or is benefitted by OBS, then it’s in your interest to help ensure OBS can continue to be maintained and improved.

All funds given to OBS through Open Collective are used to support OBS development, and Open Collective makes this extremely transparent. All expenses are publicly viewable, so you’ll know when and how all funds are being spent. That way, you’ll be able to see directly how your contributions help pay for development costs, test hardware, software licensing, and more.


We’re excited to announce that our first Gold sponsor on Open Collective is Games Done Quick, a charity fundraising organization whose events feature high-level gameplay by speedrunners from around the world. They stream live on Twitch, and use OBS as a critical component of their broadcasting stack, consistently stretching OBS to the limits of its capability. Thank you for your support!

Patreon

In addition to Open Collective, we are also launching a Patreon campaign to help fund OBS development. Whereas Open Collective is a bit more geared toward larger sponsors, Patreon is a great platform for individual users to give back to OBS.

This Patreon campaign especially helps support Jim as the project leader, maintainer, and only full time developer that the OBS Project has had since its inception. It’s impossible to overstate just how much OBS is a product of Jim himself — without him, there would be no OBS. Funds given to the Patreon are used to compensate Jim and invest into the OBS development community.

If you support OBS on Patreon, you can gain the Patron role on the OBS Discord and an appearance in the program’s About dialog, and top patrons will be listed on the contributor page as well.

What is the difference? Which one should I give to?

If you’re an individual user of OBS, it probably makes the most sense to give on Patreon. You may already have an account on Patreon anyway if you support other creators, and you can get access to some nice perks, depending on how much you give. However, if you’d feel more comfortable seeing exactly how your contributions are being spent, then Open Collective is a great way to support the project as well.

If you’re part of a business or organization that benefits from OBS, then you’ll probably feel most comfortable giving on Open Collective. Our host organization, the Open Source Collective, is a 501(c)(6) non-profit dedicated to helping open source projects like ours interface with companies like yours to make it easier to give back to the open source community. This includes automatic invoicing, handling purchase orders, reporting, and more.

If you want to contribute to OBS but can’t commit to a regular pledge, you can still make one-time contributions via the following methods:

  • One-time contributions on Open Collective
  • PayPal
  • Bitcoin: 112Y5HqUmE18yEgKdvbPHrng1ZaQ4Qd2DP
  • Bitcoin Cash: bitcoincash:qqvh6ck43a06vnkkgswrhlnc4xsxfn7rhgne07telp

Thank you

It has been an amazing privilege and pleasure to be a part of the OBS team. We have a fantastic community of developers and users, and we look forward to being able to continue doing this for years to come with your support.

OBS Studio Progress Report, February 2019

A new update has been released, and therefore a new progress report is due. The story of version 23 involves a whole lot of research and a whole lot of development.

Crowdfunding

Crowdfunding is something I now realize we should have done a long time ago. There’s no reason why we shouldn’t be pursuing this. As the project grows, and as more contributors come on board, I want to make sure that we can guarantee a future not just for myself but for the project and as many contributors as we possibly can.

After much discussion and looking at existing open source projects, we decided to create both a Patreon and an Open Collective. Our goal is to ensure that the project can not only continue operating, but also grow. Personally speaking, I want to ensure not only that I can work for the users, but that I can delegate important tasks to other contributors with experience working on core code and actually be able to pay them for doing so.

See the announcement blog post by Ben (dodgepong) for more details.

Development of browser-based widgets in 23.0

The goal of browser widgets was to let us integrate services such as Twitch and display things like the user’s chat directly within the program. In 22.0, I sort of had CEF-based Qt widgets working, but they had a number of unresolved issues, so I put them aside for the time being and made the 22.0 release. For 23.0, however, I wanted to get it finished, and get it right.

For those of you who aren’t programmers, CEF is the Chromium Embedded Framework, an awesome library that allows us to use Chrome within OBS for things like the browser source. (It’s also what Spotify uses for their entire desktop app.) Because of CEF, we were able to implement browser-based UI at no extra resource cost beyond what OBS was already using when browser sources were loaded.

High-DPI insanity

The first issue was DPI scaling support: the browser widget would not display correctly on monitors that had high-DPI scaling enabled. Eventually I thought I had found the fix: the browser subprocess needed to have high-DPI mode enabled as well as OBS. I made the commit eea74ff6, which I thought worked. However, another person on the OBS team discovered that with multiple monitors using different scaling, it would break when moved across monitors! After what seemed like an entire week of investigating DPI issues and digging deep into CEF/Chromium code, I discovered a special new API that Microsoft had recently added to Windows 10, which fixed it. The mystery was solved in commit 8521b2.

What an unprecedented amount of annoyance, and it was all just due to high-DPI scaling that the user can set on one or more of their monitors.

Cookies and Profiles

I implemented the ability to log in to your Twitch account, and eventually had Twitch chat working as a dockable panel as well. However, I soon realized that users may want different accounts across profiles; so I had to create a “cookie manager” specific to each profile, so that if a user switches from profile A with one account to profile B with another account, the chat windows/etc will appropriately go to that account’s channel. That required storing cookies for each profile in a discrete and separate location from each other. It was a pain, but CEF provided all the tools to make my plans happen.

BTTV/FFZ support for Twitch Chat

Naturally, when I created the chat, I quickly realized there was another problem: many users use the BTTV and/or FFZ extensions to add custom chat features, such as extra emotes, animated emotes, and other quality-of-life features that many people have come to love. They’re basically must-haves for Twitch chat these days, whether we like it or not. So, with the help of web pros on the OBS team, I implemented custom JavaScript that injects them into Twitch chat.

Popups and Other Unexpected Issues

When implementing service integration, I started with Twitch, which was the easiest service to implement. Mixer was similarly easy, but along the way, I discovered another annoyance and delay. When Mixer was acquired by Microsoft, they understandably started adding Microsoft features, one of which was logging in via your Microsoft account. However, when you log in with your Microsoft account on the OAuth login page, it creates a popup that starts from an about:blank URL and is controlled via Mixer JavaScript to redirect it to the Microsoft login. I thought I had custom popup whitelisting working, but I then discovered I had to completely defer all whitelisted popups to CEF and let it manage the popups itself, just to get that login working. It was a learning process.

Mac/Linux Browser Issues

Mac and Linux are another story entirely. I tried getting browser widgets working on macOS, but had a lot of crashes, so I had to put off Mac support for widgets for the 23.0 patch. I discovered what the issue was only a week ago, as well as the source behind many other crashes, but it still needs a lot of work before releasing. It’ll have to wait until 23.1 or a patch after because of that. So in the future, macOS will finally get a little loving and some stability fixes.

Linux has also been pretty neglected. I plan on getting the browser source finally working on Linux, as well as browser widgets.

Service Integration

Which leads to service integration: when going into Settings or Auto-Configuration, users can now simply connect their account, log in to their service, and use it right then and there, without having to search around for their stream key. We were going to get YouTube integrated as well, but its API is quite a bit more complex, so we put it on hold for now to get out a solid release first. Other services, such as Facebook and Restream.io, will also be coming. We may also add an external API so plugins can add their own integrations separately in the future.

Note that if for whatever reason you’d prefer to use a stream key, you still can. It’s completely optional for services that support stream keys.

NVENC Improvements

Originally, we used FFmpeg’s implementation of NVENC to save time. It took less than a few hundred lines to implement, and like x264, it only required the raw frames in system RAM. However, I knew that if I implemented it myself and revamped the backend so we could give encoders textures directly, it would improve performance. The reason we hadn’t was the complexity of also supporting Windows 7. NVIDIA had contacted me asking about it, and we talked back and forth on the matter. After those discussions, I came up with a pretty simple plan: just forget Windows 7. If the user is on Windows 7, simply fall back to the older implementation! That saved a lot of time, though not as much as I’d hoped.

Multi-threading is very difficult to do right

I started off simple to get an initial implementation going: running the encoder on the graphics thread (which is normally reserved for rendering). But if either rendering or encoding lagged, it would cause a cascade of subsequent lag. My hope was that the encode call wouldn’t stall, but unfortunately it turned out that it can, so the only solution was to separate encoding onto another thread, like we already did with software encoders. To do that, I had to implement texture sharing in commit b64d7d7. This made it possible not only to share textures (like we did for game capture), but also to lock textures between multiple threads and graphics contexts to ensure frame synchronization.

After a lot of trial and error, I finally came up with a good threaded implementation in the libobs backend, which I implemented in commit 93ba6e7. It operates on a circular queue of a few textures, and I was able to make a specific optimization: if no encoder that uses RAM data (e.g. x264) is simultaneously active, I can swap the NV12 texture directly into the queue instead of having to do an extra texture copy. Finally, after painfully laying all that groundwork for texture-based encoding support in the backend, it was time to finalize my new custom implementation of NVENC, which was accomplished in commit ed0c7bc.
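The shape of that circular queue, and the copy-versus-swap optimization, can be sketched like this (a toy model, not the actual libobs code: a "texture" here is just a frame id, the three-slot size is illustrative, and a single std::mutex stands in for the real per-texture GPU locks):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <mutex>
#include <utility>

// Stand-in for a GPU texture; real code holds graphics objects.
struct Texture {
	int frame_id = -1;
};

class TextureQueue {
	static constexpr size_t SIZE = 3;
	std::array<Texture, SIZE> slots;
	size_t write_idx = 0;
	std::mutex lock; // stands in for cross-context texture locks

public:
	// Normal path: copy the rendered texture into the next slot,
	// needed when a RAM-based encoder (e.g. x264) also wants it.
	void push_copy(const Texture &tex)
	{
		std::lock_guard<std::mutex> guard(lock);
		slots[write_idx] = tex;
		write_idx = (write_idx + 1) % SIZE;
	}

	// Optimized path: no RAM-based encoder is active, so swap the
	// NV12 texture directly into the slot -- no extra copy.
	void push_swap(Texture &tex)
	{
		std::lock_guard<std::mutex> guard(lock);
		std::swap(slots[write_idx], tex);
		write_idx = (write_idx + 1) % SIZE;
	}

	Texture peek(size_t idx)
	{
		std::lock_guard<std::mutex> guard(lock);
		return slots[idx % SIZE];
	}
};
```

The swap path hands the caller back a recycled texture from the queue, which it can render the next frame into, so textures circulate between the render and encode threads without duplication.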

So needless to say, I am very happy with how I was able to implement it as well as the optimizations I was able to come up with. It was pretty fun.

Performance Benefits

The performance benefits of the new NVENC are pretty significant. Before, the process looked like this:

  • OBS renders a frame
  • OBS transfers that texture from GPU to RAM like it would for any other encoder
  • FFmpeg NVENC uploads it to the GPU
  • FFmpeg NVENC encodes it

Now, it looks like this:

  • OBS renders a frame
  • NVENC encodes it

This is not just a performance improvement to OBS itself; it also reduces the impact OBS has on any game you’re playing while using NVENC. It’s a must-have for anyone streaming or recording games on a single-PC setup.

Pull Requests and New Features by Contributors

Of course, by no means am I the only one working on this project. The OBS developer community continued to produce a ton of great improvements while I was in the trenches working on some of these larger issues, many of which were suggested by you on our ideas page. In fact, I started enlisting the help of fellow contributor DDRBoxman to help maintain the project so I can delegate pull requests and focus on code and project management.

Here are some of the things added by our contributors this patch:

  • DDRBoxman worked hard adding support for a long-standing feature request: Decklink output, including keyer support. This is an extremely useful feature for users who use OBS in higher-level productions.
  • pkv was certainly busy this patch. He added support for recording multiple audio tracks when using FFmpeg output, which has been a long-standing feature request. He also worked on adding several new audio enhancements, including an audio limiter filter to protect your audio from clipping (requested here) and an audio expander filter for smoother noise gating (requested here). He also added support for adding PSD files as image sources.
  • cg2121 continues to be one of our most active contributors. He made it possible to automatically remux recordings upon completion (requested here), so that you don’t have to remember to do it yourself. He also implemented stereo audio balancing in the advanced audio properties dialog, added the “About” dialog to show the license and list contributors, added an image limit to the slideshow source to prevent excessive memory use when a large number of images is loaded, and finally, added the ability to resize the OBS canvas to a specific source’s size (requested here), which makes it easier to record applications at exactly their native resolution.
  • nleseul added support for batch file remuxing, so that you can remux several files from FLV to MP4 all at once.
  • VAAPI support was finally merged in this patch, adding support for hardware encoding on Intel and AMD graphics cards on Linux. This was a long-time collaboration between several members of the OBS developer community, so huge thanks in particular to w23, reboot, kingargyle, kc5nra, GloriousEggroll, and kkartaltepe for helping make this feature a reality.
  • Andersama made the Stats window dockable, so that you can make it a more permanently visible part of your OBS UI (requested here).
  • Dillon has continued his work adding usability improvements to OBS, this time revamping the way source selections and positions are shown in the OBS preview. This makes it easier to see which source is selected, which source you would select by clicking on the preview, and which parts of sources currently fall outside the preview. He also added the ability to filter the hotkeys list, making it easier to find the hotkey you are looking to set.

And keep in mind, there are many features still in the works by many contributors. So many great people work on what was just my humble little project, adding features they want, adding little improvements they like, improving performance, and submitting fixes left and right. It’s clear this has become a project made by the people, for the people, and that’s exactly how I wanted it to be. I love this community so much, and I can’t wait to see what else we can accomplish in the future.

Hindsight, 23.1, 24.0, and beyond

Briefly, I want to share some of my plans for near and long-term future.

First of all, I want to note that 23.0 wasn’t meant to take as long as it did. I originally wanted to get it out in late November 2018. However, we took on too many big features at once, did too much R&D, and ran into too many unexpected hurdles, and it ended up taking months to complete. Fortunately, I don’t see any major R&D of that level coming up again any time soon this year, so the patch pace should greatly speed up once again.

We plan on releasing a 23.1 patch quickly after; the plan is mostly just to merge pull requests and make a quick minor release, hopefully taking no more than a few weeks. It should have a bunch of smaller new features and minor bug fixes, with at least one or two more services integrated.

My biggest focus near-term is to improve user experience, design, and first-time user onboarding. We need to make it easier for users to start streaming or recording for the first time, and easier for them to understand and use OBS. The auto-configuration dialog was a big first step for that, but it’s clear there are still things we can do to improve the first-time experience. This year, especially for 24.0, expect to see not just more new features, but also improvements to user experience, onboarding, and design.

I also plan to show some love for our recording-only users, such as adding the highly-requested pause recording feature, and to spend some quality time on the macOS and Linux versions. Expect to see all those things in the near future.

For the long-term, there is a near-endless number of features we want to implement, and a near-endless number of features requested by users. I myself have some plans I’ve wanted to see come to fruition for a long time, and it’s finally looking like I can make them happen. For requested features, we can’t always guarantee that we can get every feature in right away, but if you have a feature you really want to see, please make sure to visit our ideas page, and upvote or submit a feature you’d like to see. Even if we can’t always implement it right away, it helps us gauge what the majority of you want to see, and helps us prioritize things.

Saying goodbye to a long-time contributor and friend

Michel “Osiris” Snippe, a long-time contributor to the project, passed away unexpectedly on February 15th, 2019. He joined the project near the very beginning; he was always active in helping users and contributing bug fixes, and when the original author of the browser plugin had to move on to other projects a few years ago, Osiris took it upon himself to take over maintaining the plugin for a year or two, during a period when I was unable to dedicate time to it myself due to its complexity. For the longest time, he managed builds for the browser plugin: fixing minor bugs where he could, adding minor features, and merging pull requests. He took care of the plugin as best he was able until I was finally able to refactor the project and make it a core plugin in 2018.

He was always kind to people, and always active in the community. No more than a week before we found out we lost him, he was chatting away in our Discord server, helping users with support and goofing around with other contributors, mods, and admins. To lose a friend so suddenly hurts. It’s traumatizing. It makes me sad, and it makes me upset.

Thank you Michel, for helping us when we needed it, for being a part of our community, and for being a good friend. We’ll all miss you.

Thank you

I want to thank you all for using our humble project, and I want to thank the community for taking it to the next level and making their own features, as well as for being patient with me. It started off as a tool I created for fun out of boredom, and it completely turned around not just my life, but many others’ as well. It created a wonderful community of friends. Seeing so many of you become successful with the help of our humble tool makes me so happy. Thank you so much for using OBS, and I hope we can continue working on it for as long as possible.

Also, thank you for reading this giant blog post. There was a ton of stuff to go over, and I didn’t even get to cover everything.

-Jim

Logitech Becomes Open Broadcaster Software’s First Diamond Sponsor on Open Collective

We are delighted to announce that Logitech has become the OBS Project’s first Diamond sponsor on Open Collective, demonstrating a huge commitment to OBS development.

More than 35 years ago, Logitech started connecting people through computers, and now it’s a multi-brand company designing products that bring people together through music, gaming, video, and digital content creation. In the broadcasting space, Logitech webcam solutions combined with Blue Microphones and Logitech G products have helped consumers share their passion, connect and engage with their communities, and create a unique identity. In fact, with the Logitech Capture app, users can save and restore their favorite settings for streaming through OBS. Sponsoring OBS development thus only makes sense for a company that produces tools so widely enjoyed by streamers and broadcasters.

“We are excited to become the first Diamond sponsor on Open Collective for OBS. Over the years, OBS has become the most widely adopted open source software suite that is used for recording and live streaming. Logitech is committed to delivering solutions that simplify broadcasting and streaming, and teaming with OBS is a natural way to demonstrate our commitment. We look forward to partnering OBS to create a seamless streaming experience for users,” said Guillaume Bourelly, Senior Portfolio & Product Manager at Logitech.

Open Collective is a tool that allows open source projects to raise funds from companies and individuals through fiscal sponsorship. These funds can then be distributed to members of the developer community in an open and transparent way. Since OBS is a free program, sponsorship allows the creators of OBS to continue to develop, support, and improve the program in a sustainable way. To learn more about OBS sponsorship on Open Collective, and to see a full list of all of our sponsors, click here.