Scott Hanselman

Tips and Tricks and Best Practices for Hybrid Meetings with Microsoft Teams and more

July 22, 2021 - Posted in Musings

I've had a number of emails asking questions like

I'm sure you have a ton of tips and learnings on how to create inclusive meetings where some people are remote and some are not. Do you happen to have it written down somewhere?

We are discussing what guidance and technology we could use for the teams when coming back to a hybrid world, where meetings will surely have people connected remotely. For example, we were wondering how we can take some things from remote meetings, like the chat window – which actually makes it so much easier for everybody to participate – to this hybrid world (maybe projecting it in the room, maybe assigning somebody to voice comments, etc.). Other areas we are discussing: how to deal with whiteboarding, how to keep communication flowing for remote people, recording meetings for people in different time zones…

and while I've written about remote work before, I haven't written anything on Hybrid meetings where some folks are remote and others are starting to go back into the office. Fortunately, Mads Torgersen on the team is slowly making his way back into the office and has offered me these words to share with you, Dear Reader! I've paraphrased and edited this some as well. Thanks, Mads!

Mads: Last week I held a hybrid meeting! That means I was in the conference room with other people (OK, one other person), and the rest participated remotely via Teams. The explicit purpose of the setup was to start gaining experience and learning the tricks for when there are folks back in the office on a more regular basis in phase 6.

This is to share my initial experiences, and encourage any conversation or tips other people have picked up. Feel free to share. There is no formal follow-up, and I know there are conversations around this going on in multiple places; it just feels to me like [a good place to start a] conversation at this point.

The conference room had the usual equipment: a projector, a room camera, ambient audio, and a control panel in the middle of the table running a Teams client.

Scott: While we use Teams at work, most of these tips apply equally to Zoom and other video conferencing software.

First do no harm: Mads: The most important goal is to never go back to remote participation being a second-class experience! The remote experience in Teams needs to not deteriorate even one little bit when a conference room joins in. This means that everyone in the room should also be joined to the Teams meeting. Bring a laptop or other Teams-enabled device, turn off audio input and output on it (the room will take care of that) and use the Teams features as you would as a remote participant: Raise hands (Best. Feature. Ever!), participate in chat, send reactions.

Scott: If you're using Zoom or don't have a TV or room system, you can have everyone with laptops in the room join the meeting so their faces are shared, then have just one central person turn their mic and speakers on. The idea is to allow the folks who are remote to see the context of what's happening and react to facial expressions as if they were in the room!

Create the space: Mads: At the same time, once several participants are coming to the office again, I think we should be careful not to create a force away from the office, making people stay at home just so they can go to meetings. If you don't include a room in your meeting, you are compelling people to disturb their team-room mates, scramble for sparse focus rooms, or give up on coming in. The meeting room isn't just a nice way to get together (though that is nice!); it is simply the most efficient, realistic, and best way for on-site folks to participate in a meeting. So: come phase 6, start adding those meeting rooms again!

Scott: This suggestion won't apply to every company, as not every Enterprise has the idea of 'inviting a room.' This is a good tip though if you have a physical shared space back in the office AND that room can be invited so that you're not joining Teams/Zoom on laptops but with the Poly/TV or shared devices in the office room.

Placement in the room: Mads: The meeting leader (or in-room designate) needs to sit next to the [main central] Teams panel, so as to use it actively during the meeting (see below). We experimented with where to face. There's a conflict between looking at your screen and looking at the projected output, but there's also an efficiency in being able to have those two screens show different things. Also, it's distracting for remote participants to see in-the-room folks "from the side" on either the room feed or the individual cameras.

We therefore landed on turning our laptops so we would face them in the same direction as the big screen and room camera. That way folks always see you from the front, and you don't have to turn your head between the shared and private screens. An odd downside (especially when more people are in the room) is that folks physically together don't face each other! I'm still curious to see how this plays out with half-and-half or even majority in-room participants. But don't forget to do no harm: remote folks should not feel as if local folks are huddled in a circle while they are standing outside looking at people's backs. Teams is the primary meeting venue and the physical room is secondary.

Another possible downside to being turned somewhat sideways is ergonomics. This is the same as when someone is giving a presentation and you're not optimally seated. The emerging social contract here should come with enough wiggle room for folks to be physically comfortable through long-haul meetings.

Scott: What's important here isn't the implied prescription of what directions to face, but that Mads is making a conscious effort to be actively inclusive. He's trying new things and mixing up camera angles so that folks who are remote are present and included in the meeting.

Leading the meeting: Mads: Many of us have several screens at home, and it's useful to keep track of all the moving parts across a lot of screen real estate. Having just your laptop can be quite limiting, but the Teams client [Scott: or shared TV] in the room can help a lot. First of all, if the room is not invited to your meeting (maybe you have the room invite separate like I do), it's easy to call the room from the Teams meeting on your laptop, then "pick up" on the panel (or have someone in the room do it if you're remote). From then on, the room is "in" the meeting.

The panel lets you pick different screen layouts for what is projected, and you can use that to differentiate between what's on the shared and private screens, clawing back real estate. What worked well for us was to project just the faces ("Gallery Mode") on the big screen; when something was being shared you could read it better on your private screen anyway, and having remote folks' faces bigger on the wall made for a much better sense of "connection" and a reminder of their presence in the meeting. If you're leading the meeting remotely, have someone in the room be the designated panel operator.

The panel also shows the participant list in hands-raised order like your own Teams client does, and that frees up even more real estate for the meeting leader, if you’re in the room.

Finally, the panel has a spare "raise hand" button for the room, so if you end up with one or two in-room folks who for some reason can't participate on Teams (maybe they don't have a laptop), you can have them sit nearby and let them use that to raise their hand during the meeting.

All in all this was a much better experience than I expected. I felt I had the tools I needed to run a good meeting for everyone involved, keeping the experience just as good for remote folks and making it pretty decent for those in the room. As more people come back in, a lot is going to ride on good habits, so that remote people continue to be fully included and empowered.

I hope that was useful! Any thoughts, additional or countervailing experiences, etc.? I'd love to hear them! Together we're gonna nail this hybrid thing!

Scott: What are your best tips and tricks for good hybrid meetings?



How to install .NET Core on your Remarkable 2 e-Ink tablet with Remarkable.NET

July 20, 2021 - Posted in Open Source

I blogged about The quiet rise of E Ink Tablets and Infinite Paper Note Takers - reMarkable 2 vs Onyx Boox Note Air and my love for the Remarkable 2 e-Ink tablet.

Now I see that Colby Newman is working on a .NET API for the Remarkable series of tablets. As you know, Dear Reader, I will install .NET on anything and everything, so this is right up my alley. The NuGet package is Remarkable.NET and the GitHub repo is at https://github.com/parzivail/ReMarkable.NET

.NET Core is open source and cross-platform, and Remarkable.NET is built on .NET Core 3.1, which has binaries for ARM32. That sets us up nicely for use on the Remarkable tablet.

I can download the build from https://dotnet.microsoft.com/download/dotnet/3.1

On my Remarkable I can go to Settings | Help | Copyright and Licenses to see my IP address and SSH root password.

Now I can scp and ssh root@192.168.1.71 and enter the password. Note that OpenSSH (both ssh and scp) is built into Windows 10, so you don't need to install PuTTY or WinSCP, but feel free if it makes you happy. You can also use WSL on Windows 10 if you prefer.

After downloading .NET Core 3.1 to my local machine, I use scp to copy it to /home/root, and then I ssh into the Remarkable tablet. Of course, your IP will be different.

scp .\dotnet-sdk-3.1.411-linux-arm.tar.gz root@192.168.1.71:/home/root
ssh root@192.168.1.71

Sweet.


Now, per their docs, from my ssh session on the Remarkable, I unzip dotnet and mark it as executable.

mkdir dotnet
tar xzf dotnet-sdk-3.1.411-linux-arm.tar.gz -C ./dotnet
chmod +x ./dotnet/dotnet
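
You'll probably also want dotnet on your PATH for the rest of the session. A minimal sketch, assuming you unpacked into /home/root/dotnet as above:

# run on the tablet, in the same ssh session
export DOTNET_ROOT=/home/root/dotnet
export PATH=$PATH:/home/root/dotnet
dotnet --info    # sanity check that the runtime starts on ARM32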

Fantastic. This is an amazing project. There's even an emulator that uses OpenTK to draw, so I can test locally.

The sample code in the docs is very idiomatic .NET. Nice stuff. For example:

// Create an image
var img = new Image<Rgb24>(300, 300);

// Do some image processing
img.Mutate(ctx => ctx.DrawLines(Color.Black, 3, new PointF(50, 50), new PointF(250, 250)));

// Draw the image to the screen
OutputDevices.Display.Draw(img, img.Bounds(), Point.Empty);

and

// Exit when home button is pressed
InputDevices.PhysicalButtons.Pressed += (sender, button) =>
{
if (button == PhysicalButton.Home)
CloseApp();
};

This is very easy to read.
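
By the way, to get your own app onto the tablet, a plausible flow (my sketch, not from the docs; MyApp and the paths are placeholders) is to publish framework-dependent for linux-arm and scp the output over:

# on the dev machine; MyApp is a hypothetical project name
dotnet publish -c Release -r linux-arm --self-contained false
scp -r bin/Release/netcoreapp3.1/linux-arm/publish root@192.168.1.71:/home/root/myapp

# then on the tablet, run it with the runtime we unpacked earlier
/home/root/dotnet/dotnet /home/root/myapp/MyApp.dll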

If you are ssh'ed into your Remarkable 2, you can stop and start the main Remarkable interface (xochitl) with:

systemctl stop xochitl
systemctl start xochitl

You may want to do this when running apps while ssh'ed in. There's also a community repository of free software for the Remarkable called Toltec. It's like Homebrew or winget or apt-get for the Remarkable. And the dotnet runtime is already in the Toltec listing, which is cool.

Thanks to TheRealShodan on Twitter for their help with this thread. They said:

I'm not going to lie, the UX for new people isn't perfect. Basically a new version dropped whilst the Toltec devs were mid re-architecture.

1. Back up your root password and drop on a key!

2. Install Toltec, but use the testing repo. Stable basically doesn't work - I have already queried the nature of 'stable'.

3. Install the display package. This includes rm2fb, which is a lib with hooks that make rm1 apps work on rm2. The community has kinda standardised on using the rm1 display API.

4. Install the dotnet package from Toltec. (Ed. Note: Nice to know .NET 5 works also!)

5. Build the remarkable.net sandbox binary as Release/ARM. Copy it to the device and run it with rm2fb-client dotnet Sandbox.dll

6. At this point you will see that it doesn't work, as I haven't put my PR in. You can try to fix it by changing the HardwareTouchscreenDriver construction in the lib.

7. To see anything on screen you'll have to go through my ramblings on Discord. Basically find the DllImport references to libc and change them to point to /opt/lib/librm2fb_client.so.1.0.1

Also, to run your app, shut down xochitl first, otherwise they'll both fight over the display and make a mess. Ideally you would use a launcher to manage that, but as I'm just debugging, I run from the CLI. Don't disable xochitl; that way a reboot will fix anything bad.

I'm still exploring but I'm enjoying the ride! (as always, no warranty express or implied!)

NOTE: If you mess up your Remarkable playing around with Toltec, or think you bricked it (again, don't complain to me, please), you can connect it over USB and ssh root@10.11.99.1 locally, and once you are there, there is a great thread here on how to uninstall Toltec.

Have fun, be safe.



How to turn on Memory Integrity and Core Isolation in Windows 10

July 06, 2021 - Posted in Win10 | Win11

According to the Microsoft Support website:

"Core isolation is a security feature of Microsoft Windows that protects important core processes of Windows from malicious software by isolating them in memory. It does this by running those core processes in a virtualized environment.

Memory integrity is one feature of core isolation which regularly verifies the integrity of the code running those core processes in an attempt to prevent any attacks from altering them.

We recommend that you leave this setting on, if your system supports it."

Cool. Before we start

MASSIVE WARNING

Be aware:

Do be conscious of each driver and what it does and consider what functionality - if any - you'll be losing if you remove them. If this blog post or specifically, you following the directions of this blog post, renders your machine unusable or unbootable, I'm sorry but you gotta do your research and back up your system. You should be able to turn it off and reinstall, but still, be careful.

Ok, ready? Feeling technically confident and have backups? Now continue.

Turns out this was added way back in early 2018 in Windows 10 build 17093. In fact, Hypervisor-Protected Code Integrity (HVCI) has been around since the dawn of Windows 10 itself!

I ran the Windows Security app on my system and noticed a few things. First, at the bottom it says "Your device meets the requirements for standard hardware security," but this could instead read "...for enhanced hardware security."

In order to be considered enhanced, your system needs to support:

  • TPM 2.0
  • Secure boot
  • DEP - Data Execution Prevention
  • UEFI MAT - Unified Extensible Firmware Interface Memory Attributes Table

Some of these technologies are quite old and have been in Windows for a while. It's the combination of all of them together, working as a team, that enhances your system's security. Virtualization-based Security (VBS) isolates a secure region of memory from the rest of the OS.
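
As an aside, you can ask Windows whether VBS is actually running from PowerShell using the Device Guard WMI class (the property selection below is just my preference; verify the output on your own build):

# Run in an elevated PowerShell; SecurityServicesRunning lists what's active
Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard |
    Select-Object VirtualizationBasedSecurityStatus, SecurityServicesRunning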

I started digging to understand what was interesting or unique about my system that was preventing me from turning these new features on. Additionally I wanted to make sure I was ready for Windows 11 whenever it arrives and adds more security features and requirements.

Go to the Windows Security app and click Device Security.

[Screenshot: the Windows Security app's Device Security page]

I clicked on Core Isolation to turn on VBS and noticed that the on/off switch was grayed out and I could scan for driver incompatibilities. I want to ensure that drivers I have loaded into the kernel are secure. Windows 10 has a feature where drivers can use HVCI but those drivers need to be written in certain ways to ensure they have a clear separation between data and code, and can't load data files as executable, or use dynamic code in the kernel. Again, NONE of this is new and goes back as far as 2015 or earlier.

[Screenshot: Core Isolation settings]

What do I have installed? Well, friends, a ton of crap, it turns out! LOL. All of these drivers are either super old or are using insecure coding techniques that prevent my system from turning on the Core Isolation Memory Integrity feature.

[Screenshot: the list of incompatible drivers]

I can start searching for each of these and I see a few interesting culprits. Remember, these are all either old or poorly written drivers that are loaded into the kernel on my desktop machine, chillin'.

That Western Digital one? Notice that it even says "_prewin8.sys," so I hope someone from WDC reads this blog and feels just a little bit bad about it. This is from an external USB hard drive. I certainly don't need whatever extra feature that driver lights up. My USB hard drive is just fine without it.

The STT*.sys and S3x*.sys drivers are all from various Arduino COM Port utilities and DFU-util firmware flashers. Remember those unsigned warnings you thought nothing of years ago? Well, those drivers are still with you...I mean, me.

[Screenshot: bad and incompatible drivers]

It's easy to look for "Windows Driver Package" entries, line up some of these drivers with actual installers, and remove them from Add/Remove Programs.

However, since I do a lot of IoT stuff and install random INFs manually...many of these drivers won't show up in ARP (Add/Remove Programs).

I could use Autoruns.exe and click the Drivers tab, but not every driver shows up there, and even if you uncheck a driver there, it won't be removed from the Windows Security scan. It needs to be uninstalled and deleted.

[Screenshot: the Autoruns Drivers tab]

For visible drivers, I can open Device Manager and look at the Driver details for each one.

[Screenshot: Device Manager driver details]

If the .sys file matches, I can right-click, choose Uninstall, and check the delete checkbox to remove the driver entirely.

[Screenshot: the NDI NewTek WDM Kernel Streaming Driver]

This NDI Webcam Input (NDI Virtual Input) driver's knowledge base article literally tells you to turn off Secure Boot and turn off Memory Integrity in order to install their unsigned driver. No thanks.

[Screenshot: NDI Virtual Cam digitally-signed driver error]

From an admin command line you can get a list of drivers. This one gets a list in PowerShell and puts it in your clipboard.

Get-WindowsDriver -Online | clip.exe

While this one works anywhere and gets a simple list:

wmic sysdriver get name 
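
If the clipboard dump is too noisy, you can also shape the PowerShell output yourself. Here's one way to slice it (the property names come straight from Get-WindowsDriver's output; the sorting is just my preference):

# Show driver packages with their published oemXX.inf names
Get-WindowsDriver -Online |
    Select-Object Driver, OriginalFileName, ProviderName, ClassName, Date |
    Sort-Object ProviderName | Format-Table -AutoSize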

TL;DR - Find the oemXX.inf from the Incompatible Drivers list and remove it at the command line.

When you have the list from the Incompatible Drivers scan as seen in the screenshot above, just click each driver and you'll see the "oemXX.inf" file that describes the driver. Note your numbers will vary. Then you can use pnputil, which comes with Windows, to delete the driver package from your system's driver store:

pnputil /delete-driver <example.inf> /uninstall
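
If you'd like to double-check what's in the driver store before deleting anything, pnputil can also enumerate the published names. A hypothetical run (oem42.inf is a made-up example; use the .inf you found in the scan):

pnputil /enum-drivers
pnputil /delete-driver oem42.inf /uninstall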

Again: do be conscious of each driver and what it does, and consider what functionality - if any - you'll be losing if you remove it. The warning from the top of this post still applies.

If you're removing a graphics driver or something that looks or feels essential, you'd be better off finding an updated version of that driver than just removing it.

Now I'm all set:

[Screenshot: Core Isolation with Memory Integrity turned on]

And my system says "meets the requirements for enhanced hardware security." Sweet.


Hope this helps you and sets you up for future success. I did a LOT of searching to figure this out and spent many hours breaking this down for y'all.



Adding Predictive IntelliSense to my Windows Terminal PowerShell Prompt with PSReadline

July 01, 2021 - Posted in PowerShell

I've long said You should be customizing your PowerShell Prompt with PSReadLine. Go to your PowerShell prompt, and

Install-Module PSReadLine -AllowPrerelease -Force

Then, after running code $profile or notepad $profile, add

Import-Module PSReadLine

Sure, but next, add these:

Set-PSReadLineOption -PredictionSource History
Set-PSReadLineOption -PredictionViewStyle ListView
Set-PSReadLineOption -EditMode Windows

This means that PSReadLine (and hence, your prompt in general) will use your command history to make predictions about what you want to type next. These predictions can appear inline on one line in light gray (full details on Jason's blog), but I like them to pop down in an ANSI-style ListView. You can then move through them with the up and down arrows (or Emacs or VI bindings soon).
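
One more optional PSReadLine tweak I like while you're in your profile (not required for predictions, and note that F2 already toggles between the Inline and List prediction views by default):

# Tab shows a navigable completion menu instead of cycling through matches
Set-PSReadLineKeyHandler -Key Tab -Function MenuComplete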

I'm loving PSReadLine and will be doing a video on setting up your best prompt soon.



C sharp or B flat? Experiments in self-contained native executables in .NET

June 29, 2021 - Posted in Open Source

One of the best parts of the .NET ecosystem is the excitement around experimentation. Someone is always taking .NET to the next level, trying new things, pushing the envelope.

Michal Strehovsky has an interesting experiment on his GitHub called "bflat." This is not a product, it's a playground.

bflat is a concoction of Roslyn - the "official" C# compiler that produces .NET executables - and NativeAOT (née CoreRT) - the experimental ahead of time compiler for .NET based on CoreCLR's crossgen2. Thanks to this, you get access to the latest C# features using the high performance CoreCLR GC and native code generator (RyuJIT).

bflat merges the two components together into a single ahead of time crosscompiler and runtime for C#.

I find this characterization funny:

bflat is to dotnet as VS Code is to VS.

Michal is basically stripping .NET down to the bare minimum and combining the official compiler with the experimental AOT (Ahead of Time) compiler to make small, single-file EXEs that are totally self-contained.

Michal says you can get involved if you like!

If you think bflat is useful, you can leave me a tip in my tip jar and include your GitHub user name in a note so that I can give you access to a private repo when I'm ready.

Hello World today is about 2 megs. He says it's because:

By default, bflat produces executables that are between 2 MB and 3 MB in size, even for the simplest apps. There are multiple reasons for this:

  • bflat includes stack trace data about all compiled methods so that it can print pretty exception stack traces
  • even the simplest apps might end up calling into reflection (to e.g. get the name of the OutOfMemoryException class), globalization, etc.
  • method bodies are aligned at 16-byte boundaries to optimize CPU cache line utilization
  • (Doesn't apply to Windows) DWARF debug information is included in the executable

So when I ran bflat build, here was my output.

[Screenshot: a 2.8 MB hello world]

But when I run

bflat.exe build --no-reflection --no-stacktrace-data --no-globalization --no-exception-messages .\hello.cs

I end up with a 750kb file!

[Screenshot: a 750 KB hello world]
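
For reference, the hello.cs here is nothing special - something like the canonical hello world (any minimal program will show the same size difference):

// hello.cs
using System;

class Program
{
    static void Main() => Console.WriteLine("Hello, World!");
}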

Sure, it's not as small as C code, because it'll never be C code. But you get access to a LOT MORE with C#.

This could be a useful system for creating tiny apps in C# for Linux or Windows command line administration. It also showcases how the open pieces of .NET can be plugged together differently to achieve interesting results.

I'm sure there are lots of AOT limitations around Reflection, Attributes, and more, but this is still a very cool experiment. Go check it out at https://github.com/MichalStrehovsky/bflat!



Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.