Binary Options Programmers - Blogger

Some Background and Thoughts on FPGAs

I have been lurking on this board for a few years, and I decided the other day to finally create an account so I could come out of lurk mode. As you might guess from my ID, I was able to retire at the beginning of this year, on a significantly accelerated timetable, thanks to the 20x return on my AMD stock and option investments since 2016.
I spent my career working on electronics and software for the satellite industry. We made heavy use of FPGAs, more often than not Xilinx FPGAs, since they had a radiation-tolerant line. I thought I would summarize some of the ways they were used in and around the development process. My experience is going to be very different from the datacenter settings of the last few years; the AI and big-data stuff was a pipe dream back then.
In the olden times of the 90s we used CPUs which, unlike modern processors, did not include much in the way of I/O or a memory controller. Computer board designs graduated from a CPU + a bunch of ICs (much like the original IBM PC design) to a CPU + Xilinx FPGA + RAM + ROM and maybe a 5V or 3.3V linear voltage regulator. Those old FPGAs were programmed before they were soldered to the PCB, using a dedicated programming unit attached to a PC, pretty much the same way ROMs were programmed. At the time, FPGA gate capacities were small enough that it was still feasible to design their implementation using schematics. An engineer would draw up logic gates and flip-flops just as if using discrete logic ICs, then compile the design to the FPGA binary and burn it to the FPGA using a programmer box, like a ROM. If you screwed it up you had to buy another FPGA chip; they were not erasable. The advantage of using the FPGA was that it was common to implement a custom I/O protocol to talk to other FPGAs, on other boards, which might be operating A/D and D/A converters and digital I/O driver chips. As FPGA gate capacities increased, the overall board count could be decreased.
With the advent of much larger FPGAs that were in-circuit re-programmable, they began to be used for prototyping ASIC designs. One project I worked on was developing a radiation-hardened PowerPC processor ASIC with specialized I/O. A Xilinx FPGA was used to test the implementation at approximately half speed. The PowerPC core was licensed IP, surrounded with bits that were developed in VHDL. In the satellite industry the volumes are typically not high enough to warrant developing ASICs, but ASICs could be fabbed on a rad-hard process, while at the time large-capacity re-programmable FPGAs could not. Using FPGAs for prototyping the ASIC was essential because you only had one chance to get the ASIC right; it was cost and schedule prohibitive to do any respins.
Another way re-programmable FPGAs were used was for test equipment and ground stations. The flight hardware had custom-designed ASICs of all sorts, which generally created data streams that would be transmitted down from space. It was advantageous to test the boards without the full set of downlink and receiver hardware, so a commercial FPGA board in a PC would be used to hook into the data bus in place of the radio. Similarly, other test equipment would be made which emulated the data stream from the flight hardware so that the radio hardware could be tested independently. Finally, the ground stations would often use FPGAs to pull in the digital data stream from the receiver radio and process the data in real time. These FPGAs were typically programmed using VHDL, but as tools progressed it became possible to program the entire PC + FPGA board combination using LabVIEW or Simulink, which also handled the UI. In the 2000s it was even possible to program a real-time software-defined radio using these tools.
As FPGAs progressed they became much more sophisticated. Instead of only specifying whether an I/O pin was a digital input or output, you could choose between high speed, low speed, SerDes, analog, etc. Instead of having to interface with external RAM chips, they began to include banks of internal RAM. FPGAs were no longer just gate arrays but included a quantity of "hard-core" functionality. The natural progression of FPGAs with hard cores brings them into direct competition with embedded processor SoCs. At the same time, embedded SoCs have gained flexibility in I/O pin assignment which is very similar to what FPGAs allow.
It is important to understand that in the modern era of chip design, the difference between the chip-design teams at AMD and Xilinx is primarily at the architecture level. Low-level design and validation are going to be largely the same (although they may be using different tools and best practices). There are going to be some synergies in process, and there is going to be some flexibility in having more teams capable of bringing chips to market. They are going to be able to commingle best practices between the two, which is going to be a net boost to productivity for one side or the other or both. Furthermore, AMD will have access to Xilinx FPGAs for design validation at cost, and perhaps ahead of release, and Xilinx will be able to leverage AMD's internal server clouds. The companies will also have access to a greater number of Fellow-level architects and process gurus. AMD also has internally developed IP blocks that Xilinx could leverage, and vice versa. Going forward there would be savings on externally licensed IP blocks as well.
AI is all the rage these days, but there are many other applications for generic FPGAs and for including field-programmable gates in sophisticated SoCs. As the grand convergence continues I would not be surprised at all to see FPGA fabric become as much a key component of future chips as graphics are in an APU. If Moore's law is slowing down, then the ability to reconfigure the circuitry on the fly is a potential mitigation. At some point, being able to reallocate the transistor budget on the fly is going to win out over adding more and more fixed functionality. Going a bit down the big.LITTLE path: what if a core could be reconfigured on the fly to be integer-heavy or 64-bit-float-heavy within the same transistor budget? Instead of dedicated video encoders/decoders or AVX-512 units that sit dark most of the time, the OS could gin them up on demand. In a laptop or phone setting this could be a big improvement.
If anybody has questions I'd be happy to answer. I'm sure there are a number of other posters here with a background in electronics and chip design who can weigh in as well.
submitted by RetdThx2AMD to r/AMD_Stock

Undefeated roulette tricks vs forex?

i'm new to this forex stuff (not even started yet) & it's my first time visiting r/Forex. but i've read that forex is basically gambling (guessing whether it goes up or down, with previous data as reference). i've also read about a supposedly foolproof gambling trick that works in real-life roulette. basically it goes like this :
  1. bet $1 on red - if you win, repeat step 1.
  2. if you lose, bet $3. if you win, repeat step 1.
  3. if you lose again, bet $6. if you win, repeat step 1.
  4. if you lose again, bet $14. if you win, repeat step 1.
  5. if you lose again, bet $31. if you win, repeat step 1.
so, can this be applied to forex trading? (there's lots of ads about forex trading apps, thinking of trying one) can't profit big, but it seems you can't lose either. might be a good strategy. any thoughts?
edit 1 : what i mean by forex here is binary options, which some forex trading apps operate on.
edit 2 : it takes 5 unlucky trades in a row before the $55 account is blown. is it really common to get 5 losing trades in a row?
edit 3 : here's the math (copied from a reply)
some forex apps (like Expert Option or Olymp Trade) operate on binary options (are these unregulated securities?) where usually they give an 80% return on a winning trade. the math goes like this :
  1. $1 trade and win = $0.80 profit
  2. lose, then $3 trade and win = $2.40 - $1 (lost) = $1.40 profit
  3. lose again, then $6 trade and win = $4.80 - $4 (lost) = $0.80 profit
  4. lose again, then $14 trade and win = $11.20 - $10 (lost) = $1.20 profit
  5. lose again, then $31 trade and win = $24.80 - $24 (lost) = $0.80 profit
edit 4 : some replies said **binary options type forex trading apps** are scams & fraud. bummer. maybe trading via smartphone isn't as easy as i thought.
edit 5 : still, some ability to read indicators & charts could help avoid 5 losing trades in a row. damn, if i were a programmer, i'd make a trading bot based on this idea xD (see the sketch below)
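edit 6 : here's that math as a small C++ sketch. it assumes every trade is an independent 50/50 coin flip with the 80% payout; both of those are assumptions, not facts about any particular app :

/* martingale_check.cpp -- hypothetical sketch, not trading advice.
 * Assumes independent trades, P(win) = 0.5, payout = 80%. */
#include <cstdio>

int main(void)
{
    const double stake[5] = {1.0, 3.0, 6.0, 14.0, 31.0};
    const double payout = 0.80; /* 80% return on a winning trade */
    const double p_win = 0.50;  /* assumed coin-flip odds */

    double ev = 0.0;      /* expected value of one full sequence */
    double p_reach = 1.0; /* probability of reaching this trade */
    double sunk = 0.0;    /* money already lost in the sequence */

    for (int i = 0; i < 5; ++i) {
        /* win on trade i: payout on this stake minus earlier losses */
        ev += p_reach * p_win * (stake[i] * payout - sunk);
        sunk += stake[i];
        p_reach *= (1.0 - p_win); /* all trades so far lost */
    }
    ev += p_reach * (-sunk); /* 5 straight losses: -$55 at p = 1/32 */

    printf("P(5 losses in a row) = %.5f\n", p_reach); /* ~0.03125 */
    printf("EV per sequence      = $%.2f\n", ev);     /* ~ -$0.77 */
    return 0;
}

so the 5-loss wipeout hits about once every 32 sequences, and with the 80% payout each sequence loses about $0.77 on average. the trick doesn't remove the house edge, it just trades many small wins for a rare big loss.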

submitted by Nam3AlreadyTaken to r/Forex

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step through setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing, and why you will be doing it, all in one convenient manual made for Windows users. If you want to try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers to run on those operating systems. Be warned, however: there are some system requirements that are necessary to run the CodeReady Containers we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual on Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers, which are pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces; these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment, and management by streamlining and automating these processes.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory. Because most of the commands are entered in the command-line interface, it is necessary to know how it works and how you can browse through files/folders. If you either don't have this basic knowledge or have trouble with the basic command-line interface commands in PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system's documentation or introduction guides, though the documentation can be overwhelming given the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
MacOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge of PaaS-related tools like Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

Hardware requirements
CodeReady Containers requires the following minimum system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory (RAM)
● 35 GB of storage space
● A physical CPU with virtualization support (Hyper-V on Intel, or SVM mode on AMD); this has to be enabled in the BIOS
Software requirements
Red Hat OpenShift CodeReady Containers has the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

CodeReady Containers on Linux requires the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution: Installation command
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers, a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on https://www.openshift.com/, where you need to press Log in and after that select the option "Create one now".
After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from https://cloud.redhat.com/openshift/install/crc/installer-provisioned. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command-line interface has to be opened before we can continue with the installation. On Windows we will use PowerShell. All the commands we use during the installation procedure in this guide are entered in this command-line interface unless stated otherwise. To be able to run the commands, use the command-line interface to go to the location in your $PATH where you extracted the CodeReady Containers archive.
If you have installed an outdated version and wish to update, you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the virtual machine, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should show you the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command performs the operations necessary to run CodeReady Containers and creates the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted, you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and the OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes, you need to delete the existing virtual machine with the $crc delete command, then create and start a new virtual machine with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers, so to prevent data loss we recommend saving any data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, keep in mind that it is not possible to make changes to the virtual machine afterwards. For this tutorial, however, it is not necessary to change the configuration. If you don't want to make any changes, continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a nameserver error later on. If this is the case, start the machine with crc start -n 1.1.1.1

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those who wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers, use the crc config command. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand; the available subcommands are:
get, this command allows you to see the values of a configurable property
set, this command sets the value of a configurable property
unset, this command removes a previously set value, reverting the property to its default
view, this command displays the configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true, to skip the check or emit a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get
C:\Users\[username]\$PATH>crc config set
C:\Users\[username]\$PATH>crc config unset
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine.
To increase the number of vCPUs available to the virtual machine, use the $crc config set CPUs <number> command. Keep in mind that the default number of vCPUs is 4, and the number of vCPUs you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use the $crc config set memory <size-in-MiB> command. Keep in mind that the default amount of memory is 9216 MiB, and the amount of memory you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs <number>
C:\Users\[username]\$PATH>crc config set memory <size-in-MiB>

Configuring the DNS

Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers; these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks will be run to verify the configuration.

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the api.crc.testing entry to function properly; CodeReady Containers adds an entry to /etc/hosts pointing it at the VM IP address.

Linux DNS setup

CodeReady Containers expects a slightly different DNS configuration on Linux, where NetworkManager is expected to manage networking. NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward requests for the crc.testing and apps-crc.testing domains to 192.168.130.11. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11

Accessing the OpenShift Cluster

Accessing the OpenShift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine, make sure the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command; this will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the credentials provided in the crc start output.
It is also possible to view the passwords for the kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through both users, note that the kubeadmin user should only be used for administrative tasks such as user management, and the developer user for creating projects or OpenShift applications and deploying these applications.
C:\Users\[username]\$PATH>crc console
C:\Users\[username]\$PATH>crc console --credentials

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary.
To test whether this step went correctly, execute the following command; if it returns without errors, oc is set up properly.
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as the developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start command will provide you with the password needed to log in as the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc binary can now be used to interact with your OpenShift cluster. If you, for instance, want to verify whether the OpenShift cluster Operators are available, you can execute the following command:
$oc get co 
Keep in mind that by default CodeReady Containers disables the functionality provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show how to make changes to the network route. We will also show how monitoring can be used within the platform; however, within the current version of CodeReady Containers this has been disabled.
Lastly, we will show how user management works within the platform.

Creating a project

To be able to create a project within the console, you have to be logged in on the cluster. If you have not yet done this, it can be done by running the crc console command and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu at the top left.
Now that you are properly logged in, press the dropdown menu shown in the image below, and from there click on "Create Project".
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following dialog will pop up. Here you can give your project a name and description. We chose to name it CodeReady, with the display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The containers in OpenShift Container Platform are based on OCI- or Docker-formatted images. An image is a binary that contains everything needed to run a container, as well as metadata describing the container's requirements.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to "create a registry service account"; after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied, we will go to the topology view and click on the YAML button.
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, fill in the name, namespace, and your pull secret name (which you created through your registry service account), and click on Create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell:
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm
imagestream.image.openshift.io/mediawiki imported

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application with the previously imported image, go back to the console's topology view. From here, select Container Image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the image option you'll want to select "Image stream tag from internal registry". Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creation process you should see the following; this means the application is running successfully.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling: vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and disk to the same machine, and is no longer supported by OpenShift. Horizontal scaling means increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view seen in the previous step. By pressing the up or down arrows, pods of the same application can be added or removed. This is horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application: the more you scale it up, the more resources it will take.

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the pods within OpenShift can communicate with each other via the network and assigns them their own IP addresses; this makes all containers within a pod behave as if they were on the same host. Giving each pod its own IP address means pods can be treated like physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration, and migration. To run multiple services such as front-end and back-end services, OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed or configured. Two other options that might be interesting, but will not be demonstrated in this manual, are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default, all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
Storage
OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow developers to request persistent volumes without needing any knowledge of the underlying infrastructure.
Within this storage there are a few configuration options, the most important of which is the reclaim policy. It is important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and you therefore cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV. This can be done by executing the following command:
$oc delete pv <pv-name>
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset, or, if you wish to reuse the same storage asset, you can create a new PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift. To do this, follow these steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and display the following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason, and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner, check that you are logged in as Developer, and click on "Monitoring". Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group's members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform; this default denies access to all usernames and passwords.
First, we're going to create a new user. The way this is done depends on the identity provider, specifically on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps are as follows:
$oc create user <username>
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-user-name>
The <identity-provider> is the name of the identity provider in the master configuration. For example, the following command creates an identity with identity provider ldap_provider and identity-provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user/identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-user-name> <username>
For example, the following command maps the ldap_provider:mediawiki_s identity to the user mediawiki:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we're going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <clusterrolebinding-name> \
    --clusterrole=<role> --user=<username>
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \
    --clusterrole=cluster-admin --user=admin

What did you achieve?

If you followed all the steps in this manual you should now have a functioning MediaWiki application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady Containers machine can't connect to the internet due to a nameserver error. When this is encountered, a fix that worked for us was to stop the machine and then start it with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V, it might be because your user is not an admin and therefore can't access the Hyper-V Administrators user group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Containers is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to r/openshift

MAME 0.223


MAME 0.223 has finally arrived, and what a release it is – there’s definitely something for everyone! Starting with some of the more esoteric additions, Linus Åkesson’s AVR-based hardware chiptune project and Power Ninja Action Challenge demos are now supported. These demos use minimal hardware to generate sound and/or video, relying on precise CPU timings to work. With this release, every hand-held LCD game from Nintendo’s Game & Watch and related lines is supported in MAME, with Donkey Kong Hockey bringing up the rear. Also of note is the Bassmate Computer fishing aid, made by Nintendo and marketed by Telko and other companies, which is clearly based on the dual-screen Game & Watch design. The steady stream of TV games hasn’t stopped, with a number of French releases from Conny/VideoJet among this month’s batch.
For the first time ever, games running on the Barcrest MPU4 video system are emulated well enough to be playable. Titles that are now working include several games based on the popular British TV game show The Crystal Maze, Adders and Ladders, The Mating Game, and Prize Tetris. In a clear win for MAME’s modular architecture, the breakthrough came through the discovery of a significant flaw in our Motorola MC6840 Programmable Timer Module emulation that was causing issues for the Fairlight CMI IIx synthesiser. In the same manner, the Busicom 141-PF desk calculator is now working, thanks to improvements made to Intel 4004 CPU emulation that came out of emulating the INTELLEC 4 development system and the prototype 4004-based controller board for Flicker pinball. The Busicom 141-PF is historically significant, being the first application of Intel’s first microprocessor.
Fans of classic vector arcade games are in for a treat this month. Former project coordinator Aaron Giles has contributed netlist-based sound emulation for thirteen Cinematronics vector games: Space War, Barrier, Star Hawk, Speed Freak, Star Castle, War of the Worlds, Sundance, Tail Gunner, Rip Off, Armor Attack, Warrior, Solar Quest and Boxing Bugs. This resolves long-standing issues with the previous simulation based on playing recorded samples. Colin Howell has also refined the sound emulation for Midway’s 280-ZZZAP and Gun Fight.
V.Smile joystick inputs are now working for all dumped cartridges, and with fixes for ROM bank selection the V.Smile Motion software is also usable. The accelerometer-based V.Smile Motion controller is not emulated, but the software can all be used with the standard V.Smile joystick controller. Another pair of systems with inputs that now work is the original Macintosh (128K/512K/512Ke) and Macintosh Plus. These systems’ keyboards are now fully emulated, including the separate numeric keypad available for the original Macintosh, the Macintosh Plus keyboard with integrated numeric keypad, and a few European ISO layout keyboards for the original Macintosh. There are still some emulation issues, but you can play Beyond Dark Castle with MAME’s Macintosh Plus emulation again.
In other home computer emulation news, MAME’s SAM Coupé driver now supports a number of peripherals that connect to the rear expansion port, a software list containing IRIX hard disk installations for SGI MIPS workstations has been added, and tape loading now works for the Specialist system (a DIY computer designed in the USSR).
Of course, there’s far more to enjoy, and you can read all about it in the whatsnew.txt file, or get the source and 64-bit Windows binary packages from the download page. (For brevity, promoted V.Smile software list entries and new Barcrest MPU4 clones made up from existing dumps have been omitted here.)

MAME Testers Bugs Fixed

New working machines

New working clones

Machines promoted to working

Clones promoted to working

New machines marked as NOT_WORKING

New clones marked as NOT_WORKING

New working software list additions

Software list items promoted to working

New NOT_WORKING software list additions

Merged pull requests

submitted by cuavas to r/emulation

Version Control in Game Development: 10 Vague Reasons to Use It

Whether you’re a AAA development shop or an indie programmer, building a game will surely take more than just a couple of weekends. Many things can happen between the inception of the game and the time it will be released. To track and manage these changes, developers use version (source) control. Let's talk about version control, branching, and how to select the best version control system.

https://preview.redd.it/br064yidj0z51.jpg?width=2190&format=pjpg&auto=webp&s=16b91701114c2e185a7e33bde1bebf2634cb396e
The software development process is a long and arduous road. Changes might be introduced to the game mechanics, the admin part of the game, or practically anywhere, especially if you develop a GaaS product.
These changes need to be tracked. Indeed, you don’t want to simply copy the entire folder of the game project and save it under a different name (like mycoolgame_v02). You will need version management. That’s what version control systems are for.

What is version control?

Version control is the practice of tracking and managing changes to the code base. Version control systems provide a running history of how the code changes. Using version control tools also helps to resolve conflicts when merging contributions from multiple sources.

What is source control?

Source control and version control are practically interchangeable, but to put a fine point on it, version control is the more general term. Source control systems typically manage mostly textual data; source control usually means source code or program code. Version control, on the other hand, refers not only to the source code but also to the other assets of the game app, like images, audio, and video resources.

Branching

When you think of a branch, you’d typically picture a fork-like structure. Initially, there’s only one path, but then the paths diverge. That’s essentially what a branch is in source control lingo.
As you build your game app and expose it to testers, QA, and other stakeholders, they will give input that may force you to introduce changes to the game's source. Most of the time the changes will be small, but sometimes they will be massive. These large changes are inflection points in the development process, and this is typically where you decide to branch.
The purpose of branching in version control is to achieve code isolation. You're branching probably because the new branch represents the next version of the game, or it could be something smaller, like "let's fix bug number 12345". Whatever branching method you choose, you'll need version control.

https://preview.redd.it/693agxrej0z51.png?width=640&format=png&auto=webp&s=1a9672b8137f9a53968d6b4159269559b67db644

Why use version control in game projects?

#1 - Code backup

Source control, especially with a remote repository, is a backup for your code. Indeed, you don't want your hard drive to be a single point of failure, do you? What happens to 10 months of coding work if the drive gets fried? What if your server dies? Do you have an automated backup?

#2 - Better team collaboration

Share the code with other contributors and still stay in sync with each other. If you're not using source control, how will you work with other developers? Do you really want to use Dropbox or Google Drive to share source code? How will you track each other's changes? Version control systems take care of syncing and of resolving conflicts or differences between code from multiple contributors.

#3 - Roll back to the previous version

Version control systems are a retreat strategy. Have you ever made breaking changes to the code and realized what a colossal mistake it was? If you ever want to go back, it’s a cinch to do that in a version control system.

#4 - Experiments with zero risks

It makes experimentation easy. Do you want to try something radical, but you don't want to clutter or pollute your codebase? Branch. If the idea doesn't pan out, just leave the branch and go back to the trunk.

#5 - Full audit trail

Provides an audit trail for the codebase. You can go back to previous versions of the code to find out when and where the bugs first crept in.

#6 - Better release management

Monitor the progress of the code. You can see how much work is being done, by whom, where, and when.

#7 - Code comparison and analysis

You can compare versions of your code. When you learn how to use diffing techniques, you can compare versions side by side.

#8 - Manage different versions of the game

Maintain multiple versions of your product. Branching strategies help you maintain different versions of your game/product. It is common practice for developers to have at least a production version (free from bugs, well-tested) and a work-in-progress development version.

#9 - Scaling the game projects and companies

Are you an indie developer, or are you employed by one of the game giants like Ubisoft, Tencent, or King? Whatever project you are involved in at the moment, you may come to the point when you'll need to deal with more teammates, run more tests, and fix more bugs. Version control software is an indispensable part of your game's growth.

#10 - Facilitate the continuous game updates

Thinking about the previous point, how often do you plan to release your game updates? Do you plan to do it yearly, monthly, or weekly?
The more frequently you update your game, the more likely you'll need feature branching or release branching to minimize bugs and achieve a flawless user experience. Not to mention if you select the games-as-a-service model.

What to consider when selecting version control systems

If you're about to start a project and are deciding which version control system to use, you might want to consider the following.
  1. Ability to support game projects. Some version control platforms are better suited for application development, where most of the assets are textual (source code), and some are better at handling binary files (audio, video, image assets). Make sure your source control system can handle both.
  2. User experience. The source control platform must be supported by tools. If the platform is a CLI-only (command-line interface), it might be popular amongst developers, but non-dev people (artists, designers) might have difficulty using it. The tools have to be friendly to everybody.
  3. Ecosystem of tools and integrations. Does your CI/CD platform support it? Can Jenkins pull from this repo? Your version control system must play nice with the CI/CD apps in the age of continuous integration. Other questions to ask might be:
  • Can you hook it up with Unreal/Unity?
  • Do our IDEs support it?
  • Is it easy to connect it with Trello? Jira?
  4. Hosted or on-premise. Are there companies offering a hosted solution for this version control system? Or do you have to provision a server yourself and find a data center where to park it? Hosting an on-premise source control system has advantages, but it also carries lots of baggage like IT personnel cost, capital cost, depreciation cost, etc. In contrast, a hosted solution lets you avoid all those in exchange for a fee.
  5. Single-file versioning ability. Can you check out only a single file, or do you have to download everything? Some version control systems force developers to download all the updates from a central server before they can share or see any change. This might be sensible for application code, but it may not make sense for a game app where some of the assets are large binary files.
  6. Access control. Does the system let you control who has access to what? How granular is the control? Can you assign rights down to the file level? Can you assign read but not write privileges to users for particular files?
Some common version control systems are better at handling some of the things we stated above, and some are better at managing others. You may need to do a comparison matrix to select amongst the version control options.

If you ask an application developer for a recommendation, I'm almost sure they'll tell you Git, Subversion, or CVS. These are heavy favorites of app devs. They're open-source software and great at handling textual data, but they may be ill-suited for a game development project because of the way they handle BLOBs or binary files (which a game app has lots of).
If you ask a game developer, you'll get a different recommendation; game development projects have very different version control needs than application development projects. Should it be standalone software or a built-in feature of your database or CMS platform?
How many people are involved in game development? How many databases? How are localization and content delivery done?
Gridly features built-in version control, which enables branching of content datasets, tweaking them in isolation, and merging back to the master branch. Sign up for free and make your first branch.
submitted by LocalizeDirectAB to u/LocalizeDirectAB

Best Practices for A C Programmer

Hi all,
Long-time C programmer here, primarily working in the embedded industry (particularly involving safety-critical code). I've been a lurker on this sub for a while, and I'm hoping to ask some questions regarding best practices. I've been trying to start using C++ in a lot of my work - particularly taking advantage of some of the code reuse and power of C++ (particularly constexpr, some loose template programming, stronger type checking, RAII, etc.).
I would consider myself maybe an 8/10 C programmer, but I would conservatively rate myself a 3/10 in C++ (with 1/10 meaning the absolute minimum ability to write, google syntax errata, diagnose, and debug a program). Perhaps I should preface the post by saying that I am more than aware that C is by no means a subset of C++ and that there are many language constructs permitted in one that are not in the other.
In any case, I was hoping to get a few answers regarding best practices for C++. Keep in mind that the typical target device I work with does not have a heap of any sort, and so a lot of the features that constitute "modern" C++ (non-initialization use of dynamic memory, STL meta-programming, hash maps, lambdas (as I currently understand them)) are a big no-no in terms of passing safety review.

When do I overload operators inside a class as opposed to outside?

... And what are the arguments for/against each paradigm? See below:
/* Overload example 1 (overloaded inside the class) */
class myclass
{
private:
    unsigned int a;
    unsigned int b;
public:
    myclass(void);
    unsigned int get_a(void) const;
    bool operator==(const myclass &rhs);
};

bool myclass::operator==(const myclass &rhs)
{
    if (this == &rhs)
    {
        return true;
    }
    if (this->a == rhs.a && this->b == rhs.b)
    {
        return true;
    }
    return false;
}
As opposed to this:
/* Overload example 2 (overloaded outside of the class) */
class CD
{
private:
    unsigned int c;
    unsigned int d;
public:
    CD(unsigned int _c, unsigned int _d) : c(_c), d(_d) {}  /* CTOR (members initialized in declaration order) */
    unsigned int get_c(void) const { return c; }            /* trivial getters, defined inline so this links */
    unsigned int get_d(void) const { return d; }
};

/* In this implementation, if I don't make the getters (get_c, get_d) const,
 * it won't compile despite their access specifiers being public.
 *
 * It seems like the const keyword in C++ really should be interpreted as
 * "read-only AND no side effects" rather than just read-only as in C.
 * But my current understanding may just be flawed...
 *
 * My confusion is as follows: the function args are constant references,
 * so why do I have to promise that the member functions have no side effects on
 * the private object members? Is this something specific to the == operator? */
bool operator==(const CD &lhs, const CD &rhs)
{
    if (&lhs == &rhs)
        return true;
    return (lhs.get_c() == rhs.get_c()) && (lhs.get_d() == rhs.get_d());
}
When should I use the example 1 style over the example 2 style? What are the pros and cons of 1 vs 2?
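One concrete difference I think I've already figured out (happy to be corrected): with the non-member form, implicit conversions can apply to both operands, so comparisons stay symmetric. A tiny sketch of what I mean, with made-up names:

class meters
{
private:
    unsigned int v;
public:
    meters(unsigned int _v) : v(_v) {}            /* implicit conversion from unsigned int */
    unsigned int get(void) const { return v; }
};

/* non-member form: both sides may be converted */
bool operator==(const meters &lhs, const meters &rhs)
{
    return lhs.get() == rhs.get();
}

/* meters(5) == 5u   compiles: 5u converts to meters
 * 5u == meters(5)   also compiles for the same reason
 * With a member operator==, the left operand must already be a meters,
 * so 5u == meters(5) would not compile. */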

What's the deal with const member functions?

This is more of a subtle confusion, but it seems like in C++ the const keyword means different things based on the context in which it is used. I'm trying to develop a relatively nuanced understanding of what's happening under the hood, and I have most certainly misunderstood many language features, especially because C++ has likely changed greatly in the last ~6-8 years.
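For concreteness, here's a minimal sketch of my current mental model of const member functions (which may well be flawed):

class sensor
{
private:
    unsigned int raw;
public:
    sensor(void) : raw(0) {}
    unsigned int read(void) const { return raw; }   /* const: promises not to modify *this */
    void write(unsigned int val) { raw = val; }     /* non-const: may modify members */
};

void log_reading(const sensor &s)
{
    unsigned int v = s.read();   /* fine: read() is callable on a const object */
    (void)v;
    /* s.write(42); */           /* would not compile: s is const but write() is not */
}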

When should I use enum classes versus plain old enum?

To be honest I'm not entirely certain I fully understand the implications of using enum versus enum class in C++.
This is made more confusing by the fact that there are subtle differences between the way C and C++ treat or permit various language constructs (const, enum, typedef, struct, void*, pointer aliasing, type punning, tentative declarations).
In C, enums decay to integer values at compile time. But in C++, the way I currently understand it, enums are their own type. Thus, in C, the following code would be valid, but a C++ compiler would generate a warning (or an error, I haven't actually tested it):
/* Example 3: (enums: valid in C, invalid in C++?) */
enum COLOR { RED, BLUE, GREY };
enum PET { CAT, DOG, FROG };

/* This is compatible with a C-style enum conception but not C++ */
enum SHAPE
{
    BALL = RED,   /* In C, these work because int = int is valid */
    CUBE = DOG,
};
If my understanding is indeed the case, do enums have an implicit namespace (language construct, not the C++ keyword) as in C? As an add-on to that, in C++, you can also declare enums as a sort of inherited type (below). What am I supposed to make of this? Should I just be using it to reduce code size when possible (similar to gcc option -fuse-packed-enums)? Since most processors are word based, would it be more performant to use the processor's word type than the syntax specified above?
/* Example 4: (purely C++-style enums, use of enum class / enum struct) */
/* C++ permits forward enum declaration with the underlying type specified */
enum FRUIT : int;
enum VEGGIE : short;

enum FRUIT : int      /* as I understand it, these are ints; the definition must repeat the fixed type */
{
    APPLE,
    ORANGE,
};

enum VEGGIE : short   /* as I understand it, these are shorts */
{
    CARROT,
    TURNIP,
};
Complicating things even further, I've also seen the following syntax:
/* What the heck is an enum class anyway? When should I use them? */
enum class THING
{
    THING1,
    THING2,
    THING3
};

/* And if classes and structs are interchangeable (minus assumptions
 * about default access specifiers), what does that mean for
 * the following definition? */
enum struct FOO   /* Is this even valid syntax? */
{
    FOO1,
    FOO2,
    FOO3
};
Given that enumerated types greatly improve code readability, I've been trying to wrap my head around all this. When should I be using the various language constructs? Are there any pitfalls in a given method?
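For reference, the core difference as I currently understand it (tiny sketch, corrections welcome):

enum DAY { MON, TUE };               /* unscoped: MON is injected into the enclosing scope */
enum class Shade { DARK, LIGHT };    /* scoped: must be written as Shade::DARK */

int i = MON;                               /* OK: unscoped enumerators convert to int implicitly */
/* int j = Shade::DARK; */                 /* error: no implicit conversion from a scoped enum */
int k = static_cast<int>(Shade::DARK);     /* an explicit conversion compiles fine */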

When to use POD structs (a-la C style) versus a class implementation?

If I had to take a stab at answering this question, my intuition would be to use POD structs for passing aggregate types (as in function arguments) and using classes for interface abstractions / object abstractions as in the example below:
struct aggregate
{
    unsigned int related_stuff1;
    unsigned int related_stuff2;
    char name_of_the_related_stuff[20];
};

class abstraction
{
private:
    unsigned int private_member1;
    unsigned int private_member2;
protected:
    unsigned int stuff_for_child_classes;
public:
    /* big 3 */
    abstraction(void);
    abstraction(const abstraction &other);
    ~abstraction(void);

    /* COPY semantics (I have a better grasp on this than MOVE) */
    abstraction &operator=(const abstraction &rhs);

    /* MOVE semantics (subtle semantics of which I don't fully grasp yet) */
    abstraction &operator=(abstraction &&rhs);

    /* I've seen implementations of this that use a copy + swap design pattern,
     * but that relies on std::move and I really don't get what is happening
     * under the hood in std::move. Note: this by-value form is an alternative
     * to the two overloads above (declaring it alongside them makes copy
     * assignment ambiguous), so it's commented out here. */
    /* abstraction &operator=(abstraction rhs); */

    void do_some_stuff(void);   /* member function */
};
Is there an accepted best practice for this, or is it entirely preference? Are there arguments for only using classes? And what about vtables, in cases where I need byte-wise layout guarantees (such as device register overlays) and have to control the placement of individual members?

Is there a best practice for integrating C code?

Typically (and up to this point), I've just done the following:
/* Example 5: linking a C library */
/* Disable name mangling, and then give the C++ linker /
 * toolchain the compiled binaries */
#ifdef __cplusplus
extern "C" {
#endif /* C linkage */

#include "device_driver_header_or_a_c_library.h"

#ifdef __cplusplus
} /* extern "C" */
#endif /* C linkage */

/* C++ code goes here */
As far as I know, this is the only way to prevent the C++ compiler from generating different object symbols than those in the C header file. Again, this may just be ignorance of C++ standards on my part.

What is the proper way to selectively incorporate RTTI without code size bloat?

Is there even a way? I'm relatively fluent in CMake, but I guess the underlying question is whether binaries that incorporate RTTI are compatible with those that don't (and what pitfalls may ensue when mixing the two).
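For context, the workaround I've used so far is a hand-rolled kind tag instead of dynamic_cast, roughly like the sketch below (hypothetical names), built with -fno-rtti on GCC/Clang so no RTTI tables get emitted at all:

class message
{
public:
    enum class kind { HEARTBEAT, TELEMETRY };
    virtual kind get_kind(void) const = 0;   /* hand-rolled type tag; no dynamic_cast needed */
    virtual ~message(void) {}
};

class heartbeat : public message
{
public:
    kind get_kind(void) const override { return kind::HEARTBEAT; }
};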

What about compile time string formatting?

One of my biggest gripes about C (particularly regarding string manipulation) is that variadic arguments are frequently handled at runtime (especially on embedded targets). This makes string manipulation via the C standard library (printf-style format strings) uncomputable at compile time in C.
This is sadly the case even when the ranges and values of parameters and formatting outputs are entirely known beforehand. C++ template programming seems to be a big thing in "modern" C++, and I've seen a few projects on this sub that use the Turing-completeness of the template system to do some crazy things at compile time. Is there a way to bypass this ABI limitation using C++ features like constexpr, templates, and lambdas? My (somewhat pessimistic) suspicion is that since the generated assembly must be ABI-compliant, this isn't possible. Is there a way around this? What about the std::format stuff I've been seeing on this sub periodically?
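From what I've read so far, C++20's std::format at least checks the format string at compile time (a malformed or type-mismatched format string is a compile error), even though the formatting itself still runs at runtime. A rough sketch, assuming a C++20 standard library is available:

#include <cstdio>
#include <format>
#include <string>

int main(void)
{
    /* "temp = {} C" is parsed at compile time; something like
     * std::format("{:d}", "text") would be rejected by the compiler. */
    std::string s = std::format("temp = {} C", 42);
    std::fputs(s.c_str(), stdout);
    return 0;
}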

Is there a standard practice for namespaces and when to start incorporating them?

Is it from the start? Is it when the boundaries of a module become clearly defined? Or is it just personal preference / based on project scale and modularity?
If I had to make a guess, it would be at the point that you get a "build group" for a project (a group of source files that should be compiled together), as that would loosely define the boundaries of a series of abstractions / APIs you may provide to other parts of a project.
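In code, I picture that as roughly one namespace per build group (hypothetical module name):

namespace uart
{
    void init(unsigned int baud);
    void send(const unsigned char *buf, unsigned int len);
}

/* call sites then read as uart::init(115200); which at least documents the module boundary */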
--EDIT-- markdown formatting
submitted by aWildElectron to cpp [link] [comments]

First Time Going Through Coding Interviews?

This post draws on my personal experiences and challenges over the past term at school, which I entered with hardly any knowledge of DSA (data structures and algorithms) and problem-solving strategies. As a self-taught programmer, I was a lot more familiar and comfortable with general programming, such as object-oriented programming, than with the problem-solving skills required in DSA questions.
This post reflects my journey throughout the term and the resources I turned to in order to quickly improve for my coding interview.
Here're some common questions and answers
What's the interview process like at a tech company?
Good question. It's actually pretty different from most other companies.

What It's Like To Interview For A Coding Job

First time interviewing for a tech job? Not sure what to expect? This article is for you.

Here are the usual steps:

  1. First, you’ll do a non-technical phone screen.
  2. Then, you’ll do one or a few technical phone interviews.
  3. Finally, the last step is an onsite interview.
Some companies also throw in a take-home code test—sometimes before the technical phone interviews, sometimes after.
Let’s walk through each of these steps.

The non-technical phone screen

This first step is a quick call with a recruiter—usually just 10–20 minutes. It's very casual.
Don’t expect technical questions. The recruiter probably won’t be a programmer.
The main goal is to gather info about your job search. Stuff like:

  1. Your timeline. Do you need to sign an offer in the next week? Or are you trying to start your new job in three months?
  2. What’s most important to you in your next job. Great team? Flexible hours? Interesting technical challenges? Room to grow into a more senior role?
  3. What stuff you’re most interested in working on. Front end? Back end? Machine learning?
Be honest about all this stuff—that’ll make it easier for the recruiter to get you what you want.
One exception to that rule: If the recruiter asks you about your salary expectations on this call, best not to answer. Just say you’d rather talk about compensation after figuring out if you and the company are a good fit. This’ll put you in a better negotiating position later on.

The technical phone interview(s)

The next step is usually one or more hour-long technical phone interviews.
Your interviewer will call you on the phone or tell you to join them on Skype or Google Hangouts. Make sure you can take the interview in a quiet place with a great internet connection. Consider grabbing a set of headphones with a good microphone or a bluetooth earpiece. Always test your hardware beforehand!
The interviewer will want to watch you code in real time. Usually that means using a web-based code editor like Coderpad or collabedit. Run some practice problems in these tools ahead of time, to get used to them. Some companies will just ask you to share your screen through Google Hangouts or Skype.
Turn off notifications on your computer before you get started—especially if you’re sharing your screen!
Technical phone interviews usually have three parts:

  1. Beginning chitchat (5–10 minutes)
  2. Technical challenges (30–50 minutes)
  3. Your turn to ask questions (5–10 minutes)
The beginning chitchat is half just to help you relax, and half actually part of the interview. The interviewer might ask some open-ended questions like:

  1. Tell me about yourself.
  2. Tell me about something you’ve built that you’re particularly proud of.
  3. I see this project listed on your resume—tell me more about that.
You should be able to talk at length about the major projects listed on your resume. What went well? What didn’t? How would you do things differently now?
Then come the technical challenges—the real meat of the interview. You’ll spend most of the interview on this. You might get one long question, or several shorter ones.
What kind of questions can you expect? It depends.
Startups tend to ask questions aimed towards building or debugging code. (“Write a function that takes two rectangles and figures out if they overlap.”). They’ll care more about progress than perfection.
Larger companies will want to test your general know-how of data structures and algorithms (“Write a function that checks if a binary tree is ‘balanced’ in O(n) time.”). They’ll care more about how you solve and optimize a problem.
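For a sense of scale, here's one possible sketch of that rectangle question in C++ (assuming axis-aligned rectangles; all names are made up):

struct rect
{
    int left;
    int bottom;
    int width;
    int height;
};

bool overlap(const rect &a, const rect &b)
{
    /* two rectangles overlap exactly when they overlap on both axes;
     * strict inequalities mean rectangles that merely touch don't count */
    bool x = a.left < b.left + b.width && b.left < a.left + a.width;
    bool y = a.bottom < b.bottom + b.height && b.bottom < a.bottom + a.height;
    return x && y;
}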
With these types of questions, the most important thing is to be communicating with your interviewer throughout. You'll want to "think out loud" as you work through the problem. For more info, check out our more detailed step-by-step tips for coding interviews.
If the role requires specific languages or frameworks, some companies will ask trivia-like questions (“In Python, what’s the ‘global interpreter lock’?”).
After the technical questions, your interviewer will open the floor for you to ask them questions. Take some time before the interview to comb through the company’s website. Think of a few specific questions about the company or the role. This can really make you stand out.
When you’re done, they should give you a timeframe on when you’ll hear about next steps. If all went well, you’ll either get asked to do another phone interview, or you’ll be invited to their offices for an onsite.

The onsite interview

An onsite interview happens in person, at the company’s office. If you’re not local, it’s common for companies to pay for a flight and hotel room for you.
The onsite usually consists of 2–6 individual, one-on-one technical interviews (usually in a small conference room). Each interview will be about an hour and have the same basic form as a phone screen—technical questions, bookended by some chitchat at the beginning and a chance for you to ask questions at the end.
The major difference between onsite technical interviews and phone interviews though: you’ll be coding on a whiteboard.
This is awkward at first. No autocomplete, no debugging tools, no delete button…ugh. The good news is, after some practice you get used to it. Before your onsite, practice writing code on a whiteboard (in a pinch, a pencil and paper are fine). Some tips:

  1. Start in the top-most left corner of the whiteboard. This gives you the most room. You’ll need more space than you think.
  2. Leave a blank line between each line as you write your code. Makes it much easier to add things in later.
  3. Take an extra second to decide on your variable names. Don’t rush this part. It might seem like a waste of time, but using more descriptive variable names ultimately saves you time because it makes you less likely to get confused as you write the rest of your code.
If a technical phone interview is a sprint, an onsite is a marathon. The day can get really long. Best to keep it open—don’t make other plans for the afternoon or evening.
When things go well, you’ll wrap up by chatting with the CEO or some other director. This is half an interview, half the company trying to impress you. They may invite you to get drinks with the team after hours.
All told, a long day of onsite interviews could look something like this:

If they let you go after just a couple interviews, it’s usually a sign that they’re going to pass on you. That’s okay—it happens!
There are a lot of easy things you can do the day before and morning of your interview to put yourself in the best possible mindset. Check out our piece on what to do in the 24 hours before your onsite coding interview.

The take-home code test

Code tests aren’t ubiquitous, but they seem to be gaining in popularity. They’re far more common at startups, or places where your ability to deliver right away is more important than your ability to grow.
You’ll receive a description of an app or service, a rough time constraint for writing your code, and a deadline for when to turn it in. The deadline is usually negotiable.
Here's an example problem:
Write a basic “To-Do” app. Unit test the core functionality. As a bonus, add a “reminders” feature. Try to spend no more than 8 hours on it, and send in what you have by Friday with a small write-up.
Take a crack at the “bonus” features if they include any. At the very least, write up how you would implement it.
If they’re hiring for people with knowledge of a particular framework, they might tell you what tech to use. Otherwise, it’ll be up to you. Use what you’re most comfortable with. You want this code to show you at your best.
Some places will offer to pay you for your time. It's rare, but some places will even invite you to work with them in their office for a few days, as a "trial."
Do I need to know this "big O" stuff?
Big O notation is the language we use for talking about the efficiency of data structures and algorithms.
Will it come up in your interviews? Well, it depends. There are different types of interviews.
There’s the classic algorithmic coding interview, sometimes called the “Google-style whiteboard interview.” It’s focused on data structures and algorithms (queues and stacks, binary search, etc).
That’s what our full course prepares you for. It's how the big players interview. Google, Facebook, Amazon, Microsoft, Oracle, LinkedIn, etc.
For startups and smaller shops, it’s a mixed bag. Most will ask at least a few algorithmic questions. But they might also include some role-specific stuff, like Java questions or SQL questions for a backend web engineer. They’ll be especially interested in your ability to ship code without much direction. You might end up doing a code test or pair-programming exercise instead of a whiteboarding session.
To make sure you study for the right stuff, you should ask your recruiter what to expect. Send an email with a question like, “Is this interview going to cover data structures and algorithms? Or will it be more focused on coding in X language?” They’ll be happy to tell you.
If you've never learned about data structures and algorithms, or you're feeling a little rusty, check out our Intuitive Guide to Data Structures and Algorithms.
Which programming language should I use?
Companies usually let you choose, in which case you should use your most comfortable language. If you know a bunch of languages, prefer one that lets you express more with fewer characters and fewer lines of code, like Python or Ruby. It keeps your whiteboard cleaner.
Try to stick with the same language for the whole interview, but sometimes you might want to switch languages for a question. E.g., processing a file line by line will be far easier in Python than in C++.
Sometimes, though, your interviewer will do this thing where they have a pet question that’s, for example, C-specific. If you list C on your resume, they’ll ask it.
So keep that in mind! If you’re not confident with a language, make that clear on your resume. Put your less-strong languages under a header like ‘Working Knowledge.’
What should I wear?
A good rule of thumb is to dress a tiny step above what people normally wear to the office. For most west coast tech companies, the standard digs are just jeans and a t-shirt. Ask your recruiter what the office is like if you’re worried about being too casual.
Should I send a thank-you note?
Thank-you notes are nice, but they aren’t really expected. Be casual if you send one. No need for a hand-calligraphed note on fancy stationery. Opt for a short email to your recruiter or the hiring manager. Thank them for helping you through the process, and ask them to relay your thanks to your interviewers.
1) Coding Interview Tips
How to get better at technical interviews without practicing
Chitchat like a pro.
Before diving into code, most interviewers like to chitchat about your background. They're looking for:

You should have at least one:

Nerd out about stuff. Show you're proud of what you've done, you're amped about what they're doing, and you have opinions about languages and workflows.
Communicate.
Once you get into the coding questions, communication is key. A candidate who needed some help along the way but communicated clearly can be even better than a candidate who breezed through the question.
Understand what kind of problem it is. There are two types of problems:

  1. Coding. The interviewer wants to see you write clean, efficient code for a problem.
  2. Chitchat. The interviewer just wants you to talk about something. These questions are often either (1) high-level system design ("How would you build a Twitter clone?") or (2) trivia ("What is hoisting in Javascript?"). Sometimes the trivia is a lead-in for a "real" question e.g., "How quickly can we sort a list of integers? Good, now suppose instead of integers we had . . ."
If you start writing code and the interviewer just wanted a quick chitchat answer before moving on to the "real" question, they'll get frustrated. Just ask, "Should we write code for this?"
Make it feel like you're on a team. The interviewer wants to know what it feels like to work through a problem with you, so make the interview feel collaborative. Use "we" instead of "I," as in, "If we did a breadth-first search we'd get an answer in O(n) time." If you get to choose between coding on paper and coding on a whiteboard, always choose the whiteboard. That way you'll be situated next to the interviewer, facing the problem (rather than across from her at a table).
Think out loud. Seriously. Say, "Let's try doing it this way—not sure yet if it'll work." If you're stuck, just say what you're thinking. Say what might work. Say what you thought could work and why it doesn't work. This also goes for trivial chitchat questions. When asked to explain Javascript closures, "It's something to do with scope and putting stuff in a function" will probably get you 90% credit.
Say you don't know. If you're touching on a fact (e.g., language-specific trivia, a hairy bit of runtime analysis), don't try to appear to know something you don't. Instead, say "I'm not sure, but I'd guess $thing, because...". The because can involve ruling out other options by showing they have nonsensical implications, or pulling examples from other languages or other problems.
Slow the eff down. Don't confidently blurt out an answer right away. If it's right you'll still have to explain it, and if it's wrong you'll seem reckless. You don't win anything for speed and you're more likely to annoy your interviewer by cutting her off or appearing to jump to conclusions.
Get unstuck.
Sometimes you'll get stuck. Relax. It doesn't mean you've failed. Keep in mind that the interviewer usually cares more about your ability to cleverly poke the problem from a few different angles than your ability to stumble into the correct answer. When hope seems lost, keep poking.
Draw pictures. Don't waste time trying to think in your head—think on the board. Draw a couple different test inputs. Draw how you would get the desired output by hand. Then think about translating your approach into code.
Solve a simpler version of the problem. Not sure how to find the 4th largest item in the set? Think about how to find the 1st largest item and see if you can adapt that approach.
Write a naive, inefficient solution and optimize it later. Use brute force. Do whatever it takes to get some kind of answer.
Think out loud more. Say what you know. Say what you thought might work and why it won't work. You might realize it actually does work, or a modified version does. Or you might get a hint.
Wait for a hint. Don't stare at your interviewer expectantly, but do take a brief second to "think"—your interviewer might have already decided to give you a hint and is just waiting to avoid interrupting.
Think about the bounds on space and runtime. If you're not sure if you can optimize your solution, think about it out loud. For example:

Get your thoughts down.
It's easy to trip over yourself. Focus on getting your thoughts down first and worry about the details at the end.
Call a helper function and keep moving. If you can't immediately think of how to implement some part of your algorithm, big or small, just skip over it. Write a call to a reasonably-named helper function, say "this will do X" and keep going. If the helper function is trivial, you might even get away with never implementing it.
Don't worry about syntax. Just breeze through it. Revert to English if you have to. Just say you'll get back to it.
Leave yourself plenty of room. You may need to add code or notes in between lines later. Start at the top of the board and leave a blank line between each line.
Save off-by-one checking for the end. Don't worry about whether your for loop should have "<" or "<=". Write a checkmark to remind yourself to check it at the end. Just get the general algorithm down.
Use descriptive variable names. This will take time, but it will prevent you from losing track of what your code is doing. Use names_to_phone_numbers instead of nums. Imply the type in the name. Functions returning booleans should start with "is_*". Vars that hold a list should end with "s." Choose standards that make sense to you and stick with them.
Clean up when you're done.
Walk through your solution by hand, out loud, with an example input. Actually write down what values the variables hold as the program is running—you don't win any brownie points for doing it in your head. This'll help you find bugs and clear up confusion your interviewer might have about what you're doing.
Look for off-by-one errors. Should your for loop use a "<=" instead of a "<"?
Test edge cases. These might include empty sets, single-item sets, or negative numbers. Bonus: mention unit tests!
Don't be boring. Some interviewers won't care about these cleanup steps. If you're unsure, say something like, "Then I'd usually check the code against some edge cases—should we do that next?"
Practice.
In the end, there's no substitute for running practice questions.
Actually write code with pen and paper. Be honest with yourself. It'll probably feel awkward at first. Good. You want to get over that awkwardness now so you're not fumbling when it's time for the real interview.

2) Tricks For Getting Unstuck During a Coding Interview
Getting stuck during a coding interview is rough.
If you weren’t in an interview, you might take a break or ask Google for help. But the clock is ticking, and you don’t have Google.
You just have an empty whiteboard, a smelly marker, and an interviewer who’s looking at you expectantly. And all you can think about is how stuck you are.
You need a lifeline for these moments—like a little box that says “In Case of Emergency, Break Glass.”
Inside that glass box? A list of tricks for getting unstuck. Here’s that list of tricks.
When you’re stuck on getting started
1) Write a sample input on the whiteboard and turn it into the correct output "by hand." Notice the process you use. Look for patterns, and think about how to implement your process in code.
Trying to reverse a string? Write “hello” on the board. Reverse it “by hand”—draw arrows from each character’s current position to its desired position.
Notice the pattern: it looks like we’re swapping pairs of characters, starting from the outside and moving in. Now we’re halfway to an algorithm.
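Carried through to code, that swapping pattern might come out like this C++ sketch:

#include <cstddef>
#include <string>
#include <utility>

void reverse_in_place(std::string &s)
{
    if (s.empty())
        return;
    std::size_t i = 0;
    std::size_t j = s.size() - 1;
    while (i < j)
    {
        std::swap(s[i], s[j]);   /* swap pairs, starting from the outside and moving in */
        ++i;
        --j;
    }
}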
2) Solve a simpler version of the problem. Remove or simplify one of the requirements of the problem. Once you have a solution, see if you can adapt that approach for the original question.
Trying to find the k-largest element in a set? Walk through finding the largest element, then the second largest, then the third largest. Generalizing from there to find the k-largest isn’t so bad.
3) Start with an inefficient solution. Even if it feels stupidly inefficient, it’s often helpful to start with something that’ll return the right answer. From there, you just have to optimize your solution. Explain to your interviewer that this is only your first idea, and that you suspect there are faster solutions.
Suppose you were given two lists of sorted numbers and asked to find the median of both lists combined. It’s messy, but you could simply:

  1. Concatenate the arrays together into a new array.
  2. Sort the new array.
  3. Return the value at the middle index.
Notice that you could’ve also arrived at this algorithm by using trick (2): Solve a simpler version of the problem. “How would I find the median of one sorted list of numbers? Just grab the item at the middle index. Now, can I adapt that approach for getting the median of two sorted lists?”
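Sketched in C++, that messy-but-correct version might look like the following (assuming the two lists together are non-empty):

#include <algorithm>
#include <vector>

double median_of_two(std::vector<int> a, const std::vector<int> &b)
{
    a.insert(a.end(), b.begin(), b.end());    /* 1. concatenate (a is taken by copy) */
    std::sort(a.begin(), a.end());            /* 2. sort the combined array */
    std::size_t n = a.size();                 /* 3. grab the middle value(s) */
    if (n % 2 == 1)
        return a[n / 2];
    return (a[n / 2 - 1] + a[n / 2]) / 2.0;
}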
When you’re stuck on finding optimizations
1) Look for repeat work. If your current solution goes through the same data multiple times, you’re doing unnecessary repeat work. See if you can save time by looking through the data just once.
Say that inside one of your loops, there’s a brute-force operation to find an element in an array. You’re repeatedly looking through items that you don’t have to. Instead, you could convert the array to a lookup table to dramatically improve your runtime.
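As a sketch of that fix in C++ (the problem shape here is hypothetical):

#include <unordered_set>
#include <vector>

/* Before: a linear scan of haystack for every query, O(n) per query.
 * After: build the lookup table once, then each check is O(1) on average. */
bool contains_all(const std::vector<int> &haystack, const std::vector<int> &queries)
{
    std::unordered_set<int> seen(haystack.begin(), haystack.end());
    for (int q : queries)
    {
        if (seen.find(q) == seen.end())
            return false;
    }
    return true;
}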
2) Look for hints in the specifics of the problem. Is the input array sorted? Is the binary tree balanced? Details like this can carry huge hints about the solution. If it didn’t matter, your interviewer wouldn’t have brought it up. It’s a strong sign that the best solution to the problem exploits it.
Suppose you’re asked to find the first occurrence of a number in a sorted array. The fact that the array is sorted is a strong hint—take advantage of that fact by using a binary search.
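Here's one way that binary search might look in C++, returning the index of the first occurrence (or -1 if the number is absent):

#include <vector>

int first_occurrence(const std::vector<int> &sorted, int target)
{
    int lo = 0;
    int hi = static_cast<int>(sorted.size());   /* half-open search range [lo, hi) */
    while (lo < hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] < target)
            lo = mid + 1;   /* the first occurrence, if any, is to the right of mid */
        else
            hi = mid;       /* sorted[mid] >= target, so mid could still be the answer */
    }
    if (lo < static_cast<int>(sorted.size()) && sorted[lo] == target)
        return lo;
    return -1;
}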

Sometimes interviewers leave the question deliberately vague because they want you to ask questions to unearth these important tidbits of context. So ask some questions at the beginning of the problem.
3) Throw some data structures at the problem. Can you save time by using the fast lookups of a hash table? Can you express the relationships between data points as a graph? Look at the requirements of the problem and ask yourself if there’s a data structure that has those properties.
4) Establish bounds on space and runtime. Think out loud about the parameters of the problem. Try to get a sense for how fast your algorithm could possibly be:

When All Else Fails
1) Make it clear where you are. State what you know, what you’re trying to do, and highlight the gap between the two. The clearer you are in expressing exactly where you’re stuck, the easier it is for your interviewer to help you.
2) Pay attention to your interviewer. If she asks a question about something you just said, there’s probably a hint buried in there. Don’t worry about losing your train of thought—drop what you’re doing and dig into her question.
Relax. You’re supposed to get stuck.
Interviewers choose hard problems on purpose. They want to see how you poke at a problem you don’t immediately know how to solve.
Seriously. If you don’t get stuck and just breeze through the problem, your interviewer’s evaluation might just say “Didn’t get a good read on candidate’s problem-solving process—maybe she’d already seen this interview question before?”
On the other hand, if you do get stuck, use one of these tricks to get unstuck, and communicate clearly with your interviewer throughout...that’s how you get an evaluation like, “Great problem-solving skills. Hire.”

3) Fixing Impostor Syndrome in Coding Interviews
“It's a fluke that I got this job interview...”
“I studied for weeks, but I’m still not prepared...”
“I’m not actually good at this. They’re going to see right through me...”
If any of these thoughts resonate with you, you're not alone. They are so common they have a name: impostor syndrome.
It’s that feeling like you’re on the verge of being exposed for what you really are—an impostor. A fraud.
Impostor syndrome is like kryptonite to coding interviews. It makes you give up and go silent.
You might stop asking clarifying questions because you’re afraid they’ll sound too basic. Or you might neglect to think out loud at the whiteboard, fearing you’ll say something wrong and sound incompetent.
You know you should speak up, but the fear of looking like an impostor makes that really, really hard.
Here’s the good news: you’re not an impostor. You just feel like an impostor because of some common cognitive biases about learning and knowledge.
Once you understand these cognitive biases—where they come from and how they work—you can slowly fix them. You can quiet your worries about being an impostor and keep those negative thoughts from affecting your interviews.

Everything you could know

Here’s how impostor syndrome works.
Software engineering is a massive field. There’s a huge universe of things you could know. Huge.
In comparison to the vast world of things you could know, the stuff you actually know is just a tiny sliver:
That’s the first problem. It feels like you don’t really know that much, because you only know a tiny sliver of all the stuff there is to know.

The expanding universe

It gets worse: counterintuitively, as you learn more, your sliver of knowledge feels like it's shrinking.
That's because you brush up against more and more things you don’t know yet. Whole disciplines like machine learning, theory of computation, and embedded systems. Things you can't just pick up in an afternoon. Heavy bodies of knowledge that take months to understand.
So the universe of things you could know seems to keep expanding faster and faster—much faster than your tiny sliver of knowledge is growing. It feels like you'll never be able to keep up.

What everyone else knows

Here's another common cognitive bias: we assume that because something is easy for us, it must be easy for everyone else. So when we look at our own skills, we assume they're not unique. But when we look at other people's skills, we notice the skills they have that we don't have.
The result? We think everyone’s knowledge is a superset of our own:
This makes us feel like everyone else is ahead of us. Like we're always a step behind.
But the truth is more like this:
There's a whole area of stuff you know that neither Aysha nor Bruno knows. An area you're probably blind to, because you're so focused on the stuff you don't know.

We’ve all had flashes of realizing this. For me, it was seeing the back end code wizard on my team—the one that always made me feel like an impostor—spend an hour trying to center an image on a webpage.

It's a problem of focus

Focusing on what you don't know causes you to underestimate what you do know. And that's what causes impostor syndrome.
By looking at the vast (and expanding) universe of things you could know, you feel like you hardly know anything.
And by looking at what Aysha and Bruno know that you don't know, you feel like you're a step behind.
And interviews make you really focus on what you don't know. You focus on what could go wrong. The knowledge gaps your interviewers might find. The questions you might not know how to answer.
But remember:
Just because Aysha and Bruno know some things you don't know, doesn't mean you don't also know things Aysha and Bruno don't know.
And more importantly, everyone's body of knowledge is just a teeny-tiny sliver of everything they could learn. We all have gaps in our knowledge. We all have interview questions we won't be able to answer.
You're not a step behind. You just have a lot of stuff you don't know yet. Just like everyone else.

4) The 24 Hours Before Your Interview

Feeling anxious? That’s normal. Your body is telling you you’re about to do something that matters.

The twenty-four hours before your onsite are about finding ways to maximize your performance. Ideally, you wanna be having one of those days, where elegant code flows effortlessly from your fingertips, and bugs dare not speak your name for fear you'll squash them.
You need to get your mind and body in The Zone™ before you interview, and we've got some simple suggestions to help.
5) Why You're Hitting Dead Ends In Whiteboard Interviews

The coding interview is like a maze

Listening vs. holding your train of thought

Finally! After a while of shooting in the dark and frantically fiddling with sample inputs on the whiteboard, you've come up with an algorithm for solving the coding question your interviewer gave you.
Whew. Such a relief to have a clear path forward. To not be flailing anymore.
Now you're cruising, getting ready to code up your solution.
When suddenly, your interviewer throws you a curve ball.
"What if we thought of the problem this way?"
You feel a tension we've all felt during the coding interview:
"Try to listen to what they're saying...but don't lose your train of thought...ugh, I can't do both!"
This is a make-or-break moment in the coding interview. And so many people get it wrong.
Most candidates end up only half understanding what their interviewer is saying. Because they're only half listening. Because they're desperately clinging to their train of thought.
And it's easy to see why. For many of us, completely losing track of what we're doing is one of our biggest coding interview fears. So we devote half of our mental energy to clinging to our train of thought.
To understand why that's so wrong, we need to understand the difference between what we see during the coding interview and what our interviewer sees.

The programming interview maze

Working on a coding interview question is like walking through a giant maze.
You don't know anything about the shape of the maze until you start wandering around it. You might know vaguely where the solution is, but you don't know how to get there.
As you wander through the maze, you might find a promising path (an approach, a way to break down the problem). You might follow that path for a bit.
Suddenly, your interviewer suggests a different path:
But from what you can see so far of the maze, your approach has already gotten you halfway there! Losing your place on your current path would mean a huge step backwards. Or so it seems.
That's why people hold onto their train of thought instead of listening to their interviewer. Because from what they can see, it looks like they're getting somewhere!
But here's the thing: your interviewer knows the whole maze. They've asked this question 100 times.

I'm not exaggerating: if you interview candidates for a year, you can easily end up asking the same question over 100 times.
So if your interviewer is suggesting a certain path, you can bet it leads to an answer.
And your seemingly great path? There's probably a dead end just ahead that you haven't seen yet:
Or it could just be a much longer route to a solution than you think it is. That actually happens pretty often—there's an answer there, but it's more complicated than you think.

Hitting a dead end is okay. Failing to listen is not.

Your interviewer probably won't fault you for going down the wrong path at first. They've seen really smart engineers do the same thing. They understand it's because you only have a partial view of the maze.
They might have let you go down the wrong path for a bit to see if you could keep your thinking organized without help. But now they want to rush you through the part where you discover the dead end and double back. Not because they don't believe you can manage it yourself. But because they want to make sure you have enough time to finish the question.
But here's something they will fault you for: failing to listen to them. Nobody wants to work with an engineer who doesn't listen.
So when you find yourself in that crucial coding interview moment, when you're torn between holding your train of thought and considering the idea your interviewer is suggesting...remember this:
Listening to your interviewer is the most important thing.
Take what they're saying and run with it. Think of the next steps that follow from what they're saying.
Even if it means completely leaving behind the path you were on. Trust the route your interviewer is pointing you down.
Because they can see the whole maze.
6) How To Get The Most Out Of Your Coding Interview Practice Sessions
When you start practicing for coding interviews, there’s a lot to cover. You’ll naturally wanna brush up on technical questions. But how you practice those questions will make a big difference in how well you’re prepared.
Here’re a few tips to make sure you get the most out of your practice sessions.
Track your weak spots
One of the hardest parts of practicing is knowing what to practice. Tracking what you struggle with helps answer that question.
So grab a fresh notebook. After each question, look back and ask yourself, “What did I get wrong about this problem at first?” Take the time to write down one or two things you got stuck on, and what helped you figure them out. Compare these notes to our tips for getting unstuck.
After each full practice session, read through your entire running list. Read it at the beginning of each practice session too. This’ll add a nice layer of rigor to your practice, so you’re really internalizing the lessons you’re learning.
Use an actual whiteboard
Coding on a whiteboard is awkward at first. You have to write out every single character, and you can’t easily insert or delete blocks of code.
Use your practice sessions to iron out that awkwardness. Run a few problems on a piece of paper or, if you can, a real whiteboard. A few helpful tips for handwriting code:

Set a timer
Get a feel for the time pressure of an actual interview. You should be able to finish a problem in 30–45 minutes, including debugging your code at the end.
If you’re just starting out and the timer adds too much stress, put this technique on the shelf. Add it in later as you start to get more comfortable with solving problems.
Think out loud
Like writing code on a whiteboard, this is an acquired skill. It feels awkward at first. But your interviewer will expect you to think out loud during the interview, so you gotta power through that awkwardness.
A good trick to get used to talking out loud: Grab a buddy. Another engineer would be great, but you can also do this with a non-technical friend.
Have your buddy sit in while you talk through a problem. Better yet—try loading up one of our questions on an iPad and giving that to your buddy to use as a script!
Set aside a specific time of day to practice.
Give yourself an hour each day to practice. Commit to practicing around the same time, like after you eat dinner. This helps you form a stickier habit of practicing.
Prefer small, daily doses of practice to doing big cram sessions every once in a while. Distributing your practice sessions helps you learn more with less time and effort in the long run.
Part 2 will be coming in another post!
submitted by Cyberrockz to u/Cyberrockz [link] [comments]

questioning (16 amab)

i relate more to wlw/mlm relationships than hetero ones. i recently applied for a programme and it had the non-binary option for the first time, which made me question things. it’s accurate to say i don’t meet conventional male stereotypes, but i can’t tell if i’m an effeminate guy or just not a guy at all. people used to mistake me for a girl as a kid, so i cut all my hair off and kept it shaven, but i started growing it out last summer. i’m also intrigued by nail polish and really like long nails (and just breaking the gender binary in general). i openly cycled between pan, bi and ace (i’m bi) in year ten and my friends have kinda never let me live it down? i just don’t want to make any big claims while being uncertain, but i also don’t want to avoid any change because of how uncertain i am? does it sound likely to any non-binary people that gender identity is something i should look into?
submitted by abeillore to NonBinaryTalk [link] [comments]
