Tmux is a terminal multiplexer for Unix-like systems. Similar to Linux Screen, Tmux allows you to create, manage, and easily switch between multiple terminal sessions within a single terminal window or console. It also has features such as the ability to detach and reattach sessions, split terminal windows into panes, and more. It is useful for managing multiple terminal sessions and for running long-running commands in the background while you do other work in the same terminal window.
One difference between Tmux and Screen is that Tmux is more modern and has a more user-friendly interface, with support for mouse operations and better window and pane management. Screen, on the other hand, is an older tool that is more lightweight and simple, and does not have as many features as Tmux.
Another difference is that Tmux is more configurable and extensible, with support for custom scripts and plugins, while Screen is more bare-bones and does not have as much support for customization.
Ultimately, the choice between Tmux and Screen depends on your personal preferences and needs. Both are powerful tools that can be useful in different situations.
To learn more about the usage of Screen, please read another post in this blog: The Element of Linux Screen.
brew install tmux
tmux new: start a session; the session gets an automatically generated name.
tmux new -s <session name>: start a session with a specified name.
tmux ls: list all sessions.
tmux kill-session: kill the last session.
tmux kill-session -t <session name>: kill the session with the specified name.
tmux attach-session: attach to the last session.
tmux attach-session -t <session name>: attach to the session with the specified name.
tmux a: shortcut for tmux attach-session.
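A typical round trip with these commands looks like this (the session name is arbitrary; Ctrl+b d is the detach key binding):

```sh
tmux new -s work              # start a named session
# ... work inside tmux; press Ctrl+b d to detach ...
tmux ls                       # the detached session is still alive
tmux attach-session -t work   # reattach to it
tmux kill-session -t work     # kill it when done
```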
When you start a new Tmux session, by default it creates a single window with a shell in it. You can create more windows and switch between them:

Ctrl+b c: create a new window.
Ctrl+b w: show the window list and choose from it.
Ctrl+b n: move to the next window.
Ctrl+b p: move to the previous window.
Ctrl+b 0: switch to window 0.
Ctrl+b ,: rename the current window.
Ctrl+b &: kill the current window.

The book contains four parts. Part 1 is a strong opening: the author discusses the key concepts that form the foundation of modern product work, the core principles behind great products, and the possible causes of failed product efforts. Part 2 contains sections on product teams, the different roles a successful product team needs, and how each role should work in order to lead to product success. Part 3 contains sections on the principles and techniques of product roadmaps, product vision, and objectives; I find this part especially valuable. Part 4 describes the right process from product discovery to delivery. In the last part, the author shares his view on the culture that great products rely on.
Here I’ll jot down five notes covering a few topics discussed in this book.
Behind every great product there is someone - usually someone behind the scenes, working tirelessly - who led the product team to combine technology and design to solve real customer problems in a way that met the needs of the business.
In a startup, the product manager role is usually covered by one of the co-founders. Typically, there are fewer than 25 engineers, covering anywhere from one product team up to maybe four or five.
Two inconvenient truths about product:
One of the most important things about product that I’ve learned is that there is simply no escaping these inconvenient truths, no matter how smart you might be.
Three overarching principles at work:
The purpose of product discovery is to address these four critical risks:
Good product strategies have these five principles in common:
This documentary has two seasons. In total it has fourteen episodes, and each one introduces a designer in a different field, for example graphic design, footwear design, architecture, automotive, digital product, and more. I feel each episode is made in a way that is customized to the designer: it tells their story, shows their work, and speaks their mind in a particular way they choose. You can watch the official trailer here to get a feel for it, and on this wiki you can find more information about the designers highlighted in each episode.
There are many things I like about it; to describe it in a few short words: creative, insightful, thoughtful, and encouraging.
Netflix has put this series on YouTube, so you can watch it for free now.
Important things to consider:
When you work on designing a CD platform, pay attention to where you focus. When I joined Pinterest back in 2019, I was initially on the Continuous Delivery Platform team, and I started designing Pinterest’s new CD platform. For that project, the team and I put a lot of focus on the developer experience and on making the new system easy to use. While I think that was good, I also think we should have put more thought, in the design phase, into other areas such as credentials management, autoscaling, deploy policy support, and different ways of triggering a deployment.
Benefits of flexible user-defined pipelines: allowing each team to build and maintain their own deployment pipeline from the building blocks the platform provides lets engineers experiment freely according to their needs.
Encapsulate the built-in features as platform-defined pipeline stages:
Continuous delivery is a complex process. I think using pipeline and stage as the two core concepts in Spinnaker’s design is an awesome idea. It abstracts away the complexity of various types of deployments, and allows enough flexibility and extensibility by having both managed stages and customized stages. On a side note, Apache Airflow, the data pipeline orchestration system, uses a similar principle in its design by providing operators. I may delve into the design of Airflow in another post later.
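To illustrate the idea: a pipeline is essentially a graph of stages. The sketch below only loosely follows the shape of Spinnaker’s JSON pipeline definitions; the stage types, names, and fields here are illustrative, not an exact schema:

```json
{
  "name": "deploy-my-service",
  "stages": [
    { "refId": "1", "type": "bake", "name": "Bake image" },
    { "refId": "2", "type": "deploy", "name": "Deploy canary",
      "requisiteStageRefIds": ["1"] },
    { "refId": "3", "type": "manualJudgment", "name": "Promote to prod?",
      "requisiteStageRefIds": ["2"] }
  ]
}
```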
For continuous deployment into Amazon’s EC2 virtual machine–based cloud, Spinnaker models a well-known set of operations as pipeline stages. Other VM-based cloud providers have similar functionality. Those operations mainly include:
Kubernetes makes deployment to the cloud much easier because of some of its advantages compared with VM-based cloud platforms:
Pinterest has an internal deployment system, Teletraan, which was designed and used for deploying services to VMs (Amazon EC2). After joining Pinterest, I learned that a major user pain point of Teletraan was the complicated configuration needed to set up a deployment environment for a service; for example, in Teletraan users need to configure AMIs, AZ placement, etc. I agree that Teletraan’s UI can be improved to make the experience better, but I think this problem is a result of the complexity of VM deployment. To solve it, we could either move to Kubernetes, or redesign parts of Teletraan to have a better abstraction model, similar to what Spinnaker did. In short, reducing the complexity of the developer experience cannot be done just by fixing the UI.
Right now I’m no longer working on the continuous delivery platform, but I still like to think about the problems and solutions in this area. I used to work on it, built a new system from scratch when I didn’t have much knowledge of the cloud deployment space, had many questions, and learned many lessons (and got some wins too). If you happen to read this and want to continue the discussion with me on a specific topic, please feel free to drop me a line.
Overall I find this book easy to read as long as you have some knowledge of JavaScript. In this book, a few key concepts of D3.js are clearly laid out, and the examples cover a good set of common usages and tactics you need to know for building data visualization features with D3.js.
This book has 11 chapters. Chapters 1, 2, and 3 introduce D3.js, the high-level flow and common operations of using D3.js for information visualization, and how to structure a data visualization project with D3.js.
In the first three chapters, alongside the basic concepts, a few tactics I find worth learning from the very beginning are:
Integrate scales in data binding: D3.js provides handy scale functions to normalize data values for better display. Example built-in scale functions include d3.scaleLinear(), d3.scaleSequential(), d3.scaleQuantize(), and so on. A D3 scale has two primary functions: .domain() and .range(), both of which expect arrays, and the arrays must have the same length to get the right results. The array in .domain() indicates the series of values being mapped to .range().
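For instance, a linear scale mapping data values to pixel positions might look like this (a minimal sketch):

```js
// map data values in [0, 100] to pixel positions in [0, 500]
const xScale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

xScale(50); // => 250
```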
Enter, update, merge, and exit to update DOM elements: understanding how to create, change, and move elements using enter(), exit(), and selections is the basis for all the complex D3 functionality. One note here: D3 doesn’t follow the convention that when the data changes, the corresponding display is updated; you need to build that functionality yourself.
Getting access to the actual DOM element in the selection can be accomplished in one of two ways: using this in the inline functions (which cannot be used with arrow functions), or using the .node() function.

Using this:
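A minimal sketch, using .each() to visit each element in a selection:

```js
// inside .each(), `this` is the underlying DOM element
d3.selectAll("circle").each(function (d) {
  console.log(this); // the actual DOM node (not available with arrow functions)
});
```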
Using the .node() function:
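A minimal sketch:

```js
// .node() returns the first DOM element in the selection
const svgNode = d3.select("svg").node();
console.log(svgNode.getBoundingClientRect());
```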
Chapters 4, 5, 6, 7, and 8 introduce the methods and details of building specific types of visualization for specific types of data: chart components, layouts, complex hierarchical data visualization, network visualization, and visualizing geospatial information.
Example layout functions include d3.layout.histogram(), d3.layout.pie(), d3.layout.tree(), etc.

Chapter 9 covers how to use D3 with React. The challenge of integrating D3 with React is that React and D3 both want to control the DOM. The entire select/enter/exit/update pattern in D3 is in direct conflict with React and its virtual DOM. The way most people use D3 with React is to use React to build the structure of the application and to render traditional HTML elements, and then, when it comes to the data visualization section, pass a DOM container (typically an <svg>) over to D3 and use D3 to create, destroy, and update elements.
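That pattern might look roughly like the sketch below, using React hooks and a modern D3 version with selection.join(); the BarChart component and its data prop are hypothetical:

```jsx
import { useEffect, useRef } from "react";
import * as d3 from "d3";

function BarChart({ data }) {
  const svgRef = useRef(null);

  useEffect(() => {
    // React owns the <svg>; D3 owns everything inside it
    d3.select(svgRef.current)
      .selectAll("rect")
      .data(data)
      .join("rect")
      .attr("x", (d, i) => i * 22)
      .attr("y", (d) => 100 - d)
      .attr("width", 20)
      .attr("height", (d) => d);
  }, [data]);

  return <svg ref={svgRef} width={300} height={100} />;
}
```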
Chapters 10 and 11 cover advanced usage: customizing layouts and components, and mixed-mode rendering with HTML canvas.
One thing I did when learning Golang was creating a mind map. The mind map helps me organize the different topics of Golang that I need to learn about, and dig into each part without getting lost in too many details. It also makes it much easier to remember information.
If you are also learning Golang, you can take a look at the Golang mind map here on my GitHub. It mainly covers Golang syntax, flow control, data structures, methods, functions, interfaces, and basic concurrency. One thing it doesn’t have yet is Go Modules, which was added in Go 1.11 (released in August 2018). As the Go dev team announced, current module support is preliminary. In Go 1.12, scheduled for February 2019, they will refine module support. I will update this mind map to add Go Modules then.
Lastly, 2019 is around the corner. Happy New Year!
Before I started to read this book, I had three questions in my mind and tried to find the answers in the book. Those three questions are:
This book does give me the answers, at least partial ones. I put my reading notes into Google Slides, and you can find them here to read the details. A PDF version with a light background color is available here.
The short answers to my questions are as follows:
The book describes three types of patterns.
You can find more detailed descriptions of each design pattern in my reading notes.
Performance Monitor is a small utility provided by the Windows OS; you can start it by running the command perfmon. With perfmon, you can monitor real-time system performance, and record performance data to files for later analysis. This tool provides some extremely useful interfaces in its GUI.
To view current performance activity, you just need to click the Performance Monitor button in the left panel:
By default, this view has only one performance counter: % Processor Time. You can add more counters as you need, such as the processor’s idle time, cache performance, network performance, and a lot more.
When analyzing an application’s performance, we often need to record all the performance data and generate various reports to help the analysis. We can do this in perfmon by adding User Defined Data Collector Sets (from the menu: Action -> New -> Data Collector Set).
Perfmon allows you to choose a template to start with, and to specify the location where the performance data will be saved. The process is quite straightforward as guided in the GUI. There is only one thing you need to pay attention to: the stop condition. By default, a newly created Data Collector Set has the stop condition “Overall duration: 1 minute”. With this condition set, the performance recording will stop 1 minute after starting. If the process you are monitoring takes longer than 1 minute to finish, you definitely want to increase this “Overall duration” to a longer time.
With the added Data Collector Set, you can start recording before running your application, and stop recording any time you want. The recorded data will be shown in the Reports section in the left panel. The report can also be viewed as graphs in the Performance Monitor.
The following is one example of displaying a performance report as a stacked area graph. The other graph types you can choose are: line, histogram bar, and area.
Windows Performance Recorder (WPR) is a performance recording tool that is based on Event Tracing for Windows. It is available for Windows 8 or later. It records system events that you can then analyze by using Windows Performance Analyzer (WPA). This tool is included in the Windows Assessment and Deployment Kit (Windows ADK), and you can download it here.
When WPR starts, it will guide you to choose a few configurations: profiles, scenario, details level and logging mode. You can follow the instructions here on Microsoft Docs to decide how to choose for your needs.
Then you can start recording performance by clicking the “Start” button. The recording will end when you click the “Save” button or “Cancel” button. If “Save” is clicked, the performance data will be stored to files, and Windows Performance Analyzer (WPA) will be automatically launched to show the performance reports.
WPA provides detailed performance analysis data in its rich user interface. In the left “Graph Explorer”, you can choose to view performance graphs for System Activities, Computation, Storage, Memory, and Power. To see the graphs, just drag the graph to the “Analysis” tab on the right side.
Compared with Performance Monitor (perfmon), WPA reports give you more details and more flexibility to explore the data.
This graph is a process lifetime graph generated by WPA.
WPA supports loading symbols so you can see more details of each process or command. The paths of symbols can be added either from the UI, or by setting the environment variable _NT_SYMBOL_PATH. Read this instruction if you need to understand how to load symbols or configure symbol paths in WPA.
Xperf is a command-line tool for performance recording on Windows. It is also included in the Windows Assessment and Deployment Kit (Windows ADK). Starting from Windows 8, WPR became the recommended tool for performance recording, though Xperf is still supported.
Xperf works in a similar way to WPR. It doesn’t have a GUI, but provides about ten command-line options for performance recording. The most commonly used ones are probably just start and stop.
You can simply start Xperf performance recording using this command:
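For example, using the built-in DiagEasy kernel event group (the trace file name is your choice):

```bat
xperf -on DiagEasy
rem ... run the scenario you want to record ...
xperf -d trace.etl
```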
When recording is done, the generated *.etl file can be opened and viewed in WPA.
Lastly, I’d like to introduce a lightweight tool, Process Explorer, aka procexp. Process Explorer is included in Windows’ Sysinternals Process Utilities.
Process Explorer provides a CPU performance monitor. Compared with the CPU monitor in Task Manager, this one has enhanced features that let you monitor the CPU utilization of each core and each thread. You can view a graph for each CPU.
Jenkins, originally founded in 2006 as “Hudson”, is one of the leading automation applications supporting building, deploying, and automating software projects. One great advantage of Jenkins is that there are hundreds of plugins available, enabling various kinds of extended features needed in the Continuous Integration and Continuous Delivery process. As I just checked on the Jenkins Plugins page, there are 873 plugins that fall into five categories: Platforms, User interface, Administration, Source code management, and Build management.
Effectively using Jenkins plugins makes your experience with Jenkins more productive. I’m going to write occasionally about Jenkins plugins that I have used or learned about. This first post starts with some of the plugins I used when I worked on building a Continuous Delivery system last year (from 2015 to 2016).
This plugin saves every change made to a job. It allows you to see the history of job configurations, compare configuration differences, and restore a particular version of the config. You can also see which user made a particular change if you have configured a security policy.
The configuration changes are saved by keeping copies of a job’s configuration file (config.xml in the Jenkins home directory).
This plugin visualizes the dependencies of multiple jobs by generating graphs via Graphviz. You can choose to show the dependencies of the jobs in a view. To generate the graph, Graphviz must be installed on the Jenkins server.
This plugin is very useful when you have many jobs with dependency relationships. Visualizing the dependencies helps you easily find possible mistakes in how the dependencies are set up.
This plugin allows you to set a runtime limit for jobs, and automatically abort a build if it takes longer than expected. In my experience, this plugin was extremely useful, as it solved the problem of builds getting stuck and not releasing Jenkins slave slots.
Note this plugin isn’t applicable to pipelines.
This plugin manages Perforce workspaces, synchronising code and polling/triggering builds. It also supports a few common Perforce operations such as credential authentication, changelists browsing, and labeling builds.
This plugin integrates JIRA to Jenkins. It uses JIRA REST API, and allows you to display Jenkins builds inside JIRA.
This plugin lets you trigger new builds with various ways of specifying parameters for them. The parameters can be a set of predefined properties, or be based on information or results from upstream builds.
As an example, you can tell a build job where to find packages it needs to install.
This plugin parses the console log generated by a Jenkins build. It can highlight lines of interest in the log, such as lines with errors, warnings, or information. It divides a log into sections, such as an errors section, a warnings section, etc. The numbers of errors and warnings are also displayed. It is useful for triaging errors in long build logs.
This plugin extends the email notification functionality that Jenkins provides. You can customize when an email is sent, who should receive it, and the content of the email.
This plugin calculates the disk usage of projects and builds, and shows the disk usage information on a page. It also displays a trend chart of disk usage. It makes Jenkins job and workspace maintenance easier.
This plugin backs up the global and job-specific configurations. You can see the backup history, and choose to restore a particular backup. The plugin provides setting options for the backup schedule, backup directory, maximum number of backup sets, etc.
This plugin allows you to edit, store, and reuse Groovy scripts, and execute them on any of the slaves or nodes. But since 2016 the distribution of this plugin has been suspended due to security issues. The current version of this plugin may not be safe to use.
An alternative choice is the Managed Scripts Plugin.
It’s easy to understand what decorators are, while the real questions you may have are: Why are decorators useful? When should I use decorators in my Python programs?
In some way, I see decorator functions as useful whenever you need to process or extend the inputs or outputs of a given function (or, more often, multiple functions) in some way you want. Here I list three usages of decorators that I can think of:
Usually the extended functionalities are for some kind of enhancement, format changing, or temporary usage. In other words, you are adding some functionalities without touching the core logic of the original functions. A few common use cases:
When you have functions that are likely called many times with the same input, you can write a decorator function that stores a cache of the inputs and outputs of a given function. That way, the function doesn’t need to re-compute everything each time, which makes running it multiple times faster. This is related to the memoization technique.
You can use decorator functions to process exceptions. One example is suppressing particular types of system exceptions raised by the target function. Another thing you can do is catch all exceptions raised by a function and prompt the user for how the program should proceed.
Now let me use two examples to describe the syntax of decorators.
This example comes from a good answer on Stack Overflow, by user RSabet.
A decorator function time_dec tells you how long it takes to finish a function.

Python has a shortened syntax for using decorators, which allows us to wrap a function in a decorator after we define it. This shortened syntax is the syntactic sugar @decorator_function.
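A sketch along those lines (the timing details here are mine; the original answer’s exact code may differ):

```python
import time

def time_dec(func):
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.time() - start:.4f}s")
        return result
    return wrapper

@time_dec
def myFunction(n):
    return sum(range(n))

myFunction(1000000)
```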
Note the syntactic sugar @time_dec was used. It causes Python to rebind the function name myFunction as:
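That is, the decoration is equivalent to writing:

```python
myFunction = time_dec(myFunction)
```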
This example shows how we can add caching to the calculation of prime numbers.
A decorator function memoize is used to store the inputs and calculated outputs of the original function is_prime. The second time you call is_prime with the same input number, it runs much faster than the first time.
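A minimal sketch of such a memoize decorator (the primality test here is a simple trial division, assumed for illustration):

```python
def memoize(func):
    cache = {}
    def wrapper(n):
        if n not in cache:
            cache[n] = func(n)  # compute once, remember the answer
        return cache[n]
    return wrapper

@memoize
def is_prime(n):
    if n < 2:
        return False
    return all(n % i for i in range(2, int(n ** 0.5) + 1))

print(is_prime(15485863))  # computed the slow way
print(is_prime(15485863))  # answered from the cache, much faster
```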
Python also has several built-in decorators, and you might have seen them before you knew the term decorator. The built-in decorators are mainly used to annotate methods of a class: @property, @classmethod, @staticmethod.
@property: transforms a method function into a descriptor. When applied to a method, it creates extra property objects: getter, setter, and deleter. By using @property, we can access a method as if it were an attribute.
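For example:

```python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def area(self):
        return 3.14159 * self._radius ** 2

c = Circle(2)
print(c.area)  # accessed like an attribute, no parentheses
```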
@classmethod: transforms a method function into a class-level function.
@staticmethod: transforms a method function into a class-level function, and neither the object instance nor the class is implicitly passed as the first argument.
As decorators are just ordinary functions and the decorator syntax is just syntactic sugar, you can easily use any Python built-in function as a decorator if it makes sense to use it that way.
One more thing: you may want to take a look at the PythonDecoratorLibrary page. It collects a number of decorator examples and code pieces.
In this post I’ll write a brief summary of two profiling methods, instrumentation and sampling, and four CPU profiling tools on Linux: perf, gprof, Valgrind, and Google’s gperftools.
Different profiling methods use different ways to measure the performance of an application when it is executed. Instrumentation and Sampling are the two categories that profiling methods fall into.
The instrumentation method inserts special code at the beginning and end of each routine to record when the routine starts and when it ends. The time spent calling other routines within a routine may also be recorded. The profiling result shows the actual time taken by the routine on each call.
There are two types of instrumenting profiler tools: source-code modifying profilers and binary profilers. Source-code modifying profilers insert the instrumenting code in the source code, while the binary profilers insert instrumentation into an application’s executable code once it is loaded in memory.
The good thing about the instrumentation method is that it gives you the actual time. However, the inserted instrumentation code (timer calls) takes some time itself. To reduce that impact, at the start of each run profilers measure the overhead incurred by the instrumenting process and later subtract this overhead from the measurement result. But instrumenting can still significantly affect an application’s performance in some cases, for example when a routine is very short and frequently called, because the inserted instrumentation disturbs the way the routine executes on the CPU.
Sampling measures applications without making any modifications to them. Sampling profilers record the executing instruction when the operating system interrupts the CPU at regular intervals to perform process switches, and correlate the recorded execution points with the routines and source code during the linking process. The profiling result shows the frequency with which each routine and source line was executing during the application’s run.
Sampling profilers cause little overhead to the application, and they work well on small and often-called routines. One drawback is that the evaluations of time spent are statistical approximations rather than actual time. Also, sampling can only tell what routine is executing currently, not where it was called from. As a result, sampling profilers can’t report call traces of an application.
The perf tool is provided by the Linux kernel (2.6+) for profiling CPU and software events. You can get the tool installed by:
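On Debian/Ubuntu, for example (package names vary by distribution):

```sh
sudo apt-get install linux-tools-common linux-tools-$(uname -r)
```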
perf is based on the perf_events system, which uses event-based sampling, and it uses CPU performance counters to profile the application. It can instrument hardware counters, static tracepoints, and dynamic tracepoints. It also provides per-task, per-CPU, and per-workload counters, sampling on top of these, and source code event annotation. It does not instrument the code, so it is very fast and generates precise results.
You can use perf to profile with the perf record and perf report commands:
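For example (assuming your application binary is ./myapp):

```sh
perf record -g ./myapp   # -g also collects call graphs
perf report
```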
The perf record command collects samples and generates an output file called perf.data. This file can then be analyzed using the perf report and perf annotate commands. The sampling frequency can be specified with the -F option; for example, perf record -F 1000 means 1000 samples per second.
GNU profiler gprof tool uses a hybrid of instrumentation and sampling. Instrumentation is used to collect function call information, and sampling is used to gather runtime profiling information.
Using gprof to profile your applications requires the following steps:

1. Compile and link the program with the -pg option.
2. Run the program; this generates the profile data file gmon.out.
3. Run the gprof command to analyze the profile data.
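For example (myapp and myapp.c are placeholder names):

```sh
gcc -pg -o myapp myapp.c
./myapp                              # produces gmon.out in the current directory
gprof myapp gmon.out > analysis.txt
```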
The gprof command prints a flat profile and a call graph on standard output. The flat profile shows how much time was spent executing directly in each function. The call graph shows which functions called which others, and how much time each function used when its subroutine calls are included. You can use the supported options listed here to control gprof output styles, such as enabling line-by-line analysis and annotated source.
Valgrind is an instrumentation framework for building dynamic analysis tools. The Valgrind distribution includes six production-quality tools that can detect memory issues and profile programs. Callgrind, built as an extension to Cachegrind, provides function call graphs. A separate visualization tool, KCachegrind, can also be used to visualize Callgrind’s output.
Valgrind is a CPU emulator. The technology behind Valgrind is dynamic binary instrumentation (DBI), whereby the analysis code is added to the original code of the client program at run time. The profiling tool Callgrind is simulation based; it uses Valgrind as a runtime instrumentation framework. The following two papers explain how Valgrind and Callgrind work in detail.
You need to use the following commands to profile your program with valgrind:

1. Run your program under Callgrind; this produces an output file named callgrind.out.<pid>.
2. Analyze the output with the callgrind_annotate or kcachegrind tool.
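For example (./myapp is a placeholder):

```sh
valgrind --tool=callgrind ./myapp
callgrind_annotate callgrind.out.<pid>
```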
gperftools, originally “Google Performance Tools”, is a collection of tools for analyzing and improving the performance of multi-threaded applications. It offers a fast malloc, a thread-friendly heap checker, a heap profiler, and a CPU profiler. gperftools was developed and tested on x86 Linux systems, and it works in its full generality only on those systems. Some of the libraries and functionality have been ported to other Unix systems and Windows.
To use the CPU profiler in gperftools, you need to:

1. Link your code with -lprofiler.
2. Set the environment variable CPUPROFILE, then run the application.
3. Run pprof commands to analyze the profiling result.

Include the gperftools header files in your source file:
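For example:

```cpp
#include <gperftools/profiler.h>
```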
Link with -lprofiler; the profiler library is in the installation directory of gperftools:
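For example (the library path here is an assumption; adjust it to your install location):

```sh
g++ -o myapp myapp.cc -L/usr/local/lib -lprofiler
```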
Set the CPUPROFILE environment variable, which controls the location of the profiler’s output data file:
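For example:

```sh
CPUPROFILE=/tmp/myapp.prof ./myapp
```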
Run pprof commands to analyze the profiling result:
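For example, a text report and a graphical one:

```sh
pprof --text ./myapp /tmp/myapp.prof
pprof --gv   ./myapp /tmp/myapp.prof   # requires gv and graphviz
```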
Sharing on Facebook started as largely text and quickly changed to largely photos. Since 2014, more videos have been posted and shared among users. The challenge is that building a video processing system is much harder than building a text or image processing system. Videos are greedy: they will consume all your resources, including CPU, memory, disk, network, and anything else.
Before building the Streaming Video Engine system, the team started by reviewing Facebook’s existing video uploading and processing pipeline, which was slow and not scalable. They found several problems that needed change or improvement:
The new Streaming Video Engine (SVE) is expected to solve the aforementioned problems, and to meet the four design goals:
These four design goals, in my opinion, are also the most common goals applicable to most engineering infrastructure systems.
Let’s take a deep dive to see how SVE was designed to meet these goals.
With this design, the processing speedup reached 2.3x (small videos, < 3 MB) to 9.3x (large videos, > 1 GB).
SVE achieved 20% smaller video file sizes. This is a huge saving on users’ data plans.
This Streaming Video Engine was designed, coded and tested in roughly 9 months. The most important learnings are:
Instagram’s backend, which serves over 400 million active users every day, is built on a Python/Django stack. The decision about whether to move from Python 2 to Python 3 was really a decision between investing in a version of the language that was mature but wasn’t going anywhere (Python 2 is expected to retire in 2020), or in the next version, which had great and growing community support. The major motivations behind Instagram’s migration to Python 3 are:
The whole migration process took about 10 months, in roughly 3 stages.
In the talk, Lisa shared the challenges they faced in the migration process and how they solved those problems.
- In Python 3, map returns an iterator instead of a list; solved by converting all maps to lists.
- Another issue was solved by using sort_keys in the json.dump function.
- A comparison incorrectly returned True because of a unicode issue; solved by adding the “magical” character “b” (a bytes prefix).

In Feb 2017, Instagram’s stack completely dropped Python 2 and moved to Python 3 (v3.6). So far they’ve gotten this from Python 3:
One more thing, in the talk Hui Ding also briefly discussed a few Python Efficiency Strategies that Instagram used to support the growing number of features and users:
Changing an existing service to use a new version of a language can never be easy, especially when your service is at such a scale, serving millions of people. You just cannot afford to break the existing service. Moving to Python 3 in 10 months must have been a challenging process. “It can be done. It’s worth it. Make it happen. And make Python 3 better.”
Nice work Instagram!
In the end I fixed these linker errors by using the TARGET_LINK_LIBRARIES command in the project’s CMakeLists.txt to specify the linker package dependencies, as follows:
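The exact libraries in my project aren’t shown here; the form of the command was along these lines (the target and library names are placeholders):

```cmake
target_link_libraries(my_target some_library another_library)
```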
When I was looking for solutions to fix those linker errors, I found several related CMake commands that look quite similar and can be confusing in terms of their functions and when to use them. Here is a quick summary of these commands.
Related CMake commands:
Usage: add_dependencies(<target> ...)
ADD_DEPENDENCIES adds a dependency between top-level targets. It makes a top-level target depend on other top-level targets so that the dependencies are built first. This command doesn’t make CMake find the paths to the targets, though.
Usage: link_directories(directory1 directory2 ...)
LINK_DIRECTORIES specifies directories in which the linker will look for libraries. This command applies only to targets created after it is called. It is rarely necessary; you can always pass absolute paths to the target_link_libraries() command instead.
The function of this command is similar to the -L option in g++. It is also similar to adding the specified directories to the environment variable LD_LIBRARY_PATH.
Usage: link_libraries([item1 [item2 [...] ]])
LINK_LIBRARIES specifies link libraries or flags to use when linking all targets added later by commands such as add_executable() or add_library().
This command was deprecated in CMake version 3.0 and was added back in version 3.2. But the CMake documentation recommends using target_link_libraries to replace this command whenever possible.
The link libraries specified in this command are expected to be full paths.
Usage: target_link_libraries(<target> ... <item> ...)
TARGET_LINK_LIBRARIES specifies libraries or flags to use when linking a given target and/or its dependents. The specified target must have been created by a command such as add_executable() or add_library() within the project, or be an imported library.
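A short sketch contrasting these commands (all names and paths are hypothetical):

```cmake
add_executable(my_app main.cpp)

# discouraged: affects all targets created afterwards
# link_directories(/opt/mylib/lib)
# link_libraries(mylib)

# preferred: scoped to one target; full paths or target names work
target_link_libraries(my_app /opt/mylib/lib/libmylib.a)
```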
In recent months, the most notable change in WeChat has undoubtedly been the launch and initial promotion of WeChat Mini Programs. A Mini Program (小程序, “Mini Program” in the English version) is essentially a JavaScript-based application framework from WeChat, similar to React and Weex. It provides developers with a set of UI components and APIs into the underlying platform. Apps built as Mini Programs can be used on the WeChat platform without installation.

The entry point for Mini Programs sits in the “Discover” menu, one of the four fixed tabs at the bottom of WeChat. Functionally, the Mini Program entry is similar to the Apple App Store, but its design is completely different. Here are some comparisons.

The first time users enter the Mini Program entry, all they see is a search box and blank space below it. This is strikingly different from the App Store, which greets you with a dazzling array of apps and rankings.

Once you find a Mini Program through search (or a QR code), a single tap starts it immediately: no installation, and no app launch screen or process. The first time you interact with a Mini Program, this pause-free flow can even feel unfamiliar. What I particularly noticed is that this tap-to-use flow also shapes the entry design of the Mini Program itself; for example, making registration or login the main entry is no longer appropriate. A Mini Program needs to deliver useful information and functionality to the user in the shortest possible time.

You exit a running Mini Program via the back button at the top left of WeChat, which returns you to the search results page. It feels as if nothing had just happened. Mini Programs you have used are kept in a list below the search box on the entry page. Items can be removed from the list one by one, the same way you delete a WeChat conversation: swipe left on the item and choose delete.

These characteristics of the Mini Program entry once again fully display the consistent style of WeChat’s product design, that is, the product philosophy behind WeChat that Allen Zhang (张小龙) has talked about. From my observations, in summary:

(The parts about WeChat’s design are quoted from Allen Zhang’s talks on WeChat’s product philosophy; the parts about Mini Programs are my own summary.)

Mini Programs are not drawing much heat right now; developers are still in a wait-and-see mode, and Apple’s recent blocking of the tipping feature has fueled arguments that Mini Programs will fade. Yet however Mini Programs develop in the future, I felt the care in their design immediately after trying them. Exploring the vast sea of internet and mobile products, I find that some products leave you wondering what the designer was thinking (possibly nothing), some make you wonder how the designer could miss something so obvious, some make you admire how clever the designer is, and some move you with how much thought the designer has put in. For me, WeChat belongs to the last category, despite its plain (even unfashionable) looks.

Addendum, 2017-04-24:

The success of a product is of course not determined by design alone, and product design is by no means limited to UX. Consider a few simple questions:

One reason the design of a successful product is worth analyzing is that it is usually not bad; more importantly, it has become the way a large number of people are used to doing things. Whether you want to cater to people’s habits or to change them, observing and analyzing star products is necessary.
Two types of String are available in C++: C-Strings (C-style Strings), and STL Strings.
C-String is a fundamental type in C++. Compared with STL String, a C-String is small, simple, and fast. A C-String is a special case of an array of characters terminated with a 0; this is sometimes called a null-terminated string. A C-String can be printed out with a printf statement using the %s format string. We can access the individual characters in a C-String just as we do in an array.
Example: print a C-String with %s
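A minimal sketch:

```cpp
#include <cstdio>

int main() {
    const char* str = "Hello, C-String";
    printf("%s\n", str);
    return 0;
}
```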
Example: access characters of a C-String
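A minimal sketch:

```cpp
#include <cstdio>

int main() {
    char str[] = "hello";
    for (int i = 0; str[i] != '\0'; i++) {
        printf("%c ", str[i]);  // index into the array of characters
    }
    return 0;
}
```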
Example: access characters of a C-String, using a pointer
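A minimal sketch:

```cpp
#include <cstdio>

int main() {
    char str[] = "hello";
    for (char* p = str; *p != '\0'; p++) {
        printf("%c ", *p);  // walk the string through a pointer
    }
    return 0;
}
```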
Example: access characters of a C-String, C++ 11 style
In C++ 11, a range based loop can be used to access arrays and also C-Strings.
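A minimal sketch:

```cpp
#include <cstdio>

int main() {
    char str[] = "hello";
    for (char c : str) {
        printf("[%c] ", c);  // note: the terminating null is visited too
    }
    return 0;
}
```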
You may have noticed that the null character at the end of the C-String was printed out in the above code snippet. This is because the range-based for loop in C++11 looks at the entire array and doesn’t treat the null as the end of the C-String. To get rid of the ending null character, we need to add a condition check inside the range-based loop.
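A minimal sketch:

```cpp
#include <cstdio>

int main() {
    char str[] = "hello";
    for (char c : str) {
        if (c == '\0') break;  // stop at the terminating null
        printf("[%c] ", c);
    }
    return 0;
}
```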
The STL String class is a special type of container designed to operate on sequences of characters. It is designed with many features and functions to operate on strings efficiently and intuitively. To use STL String, you need to include the string header. The following example shows the basic usage of STL string, including getting the length of a string, string concatenation, comparison, and accessing each character.
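A condensed sketch covering those operations:

```cpp
#include <iostream>
#include <string>

int main() {
    std::string a = "Hello";
    std::string b = "World";

    // length
    std::cout << a.length() << std::endl;        // 5

    // concatenation
    std::string c = a + ", " + b + "!";
    std::cout << c << std::endl;                 // Hello, World!

    // comparison
    std::cout << (a == b ? "equal" : "not equal") << std::endl;

    // accessing each character
    for (char ch : a) {
        std::cout << ch << ' ';
    }
    std::cout << std::endl;
    return 0;
}
```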
The System-on-Chip design and verification process is a complicated one. Unlike the world of the Web and the Internet, the design and development of hardware products carries higher risk and lower tolerance for mistakes. The SoC design and verification process requires collaboration across multiple teams and vendors. Lots of hard decisions to make. Lots of trade-offs to consider. Moreover, the non-recurring engineering (NRE) charge makes sufficient and solid verification a must, given limited time and resources. Tools and automated flows are an essential part of any design house.
Here is a list of areas that need tools and flows for SoC software and hardware design and verification, based on my experience.
| Usage Area of Tools/Flows | Software | Hardware | Design Usage | Verification Usage |
| --- | --- | --- | --- | --- |
| Test Generation | x | x | x | |
| Regression System | x | x | x | |
| Coverage Reporting | x | x | x | |
| Coding Style Check | x | x | x | |
| Code Review System | x | x | x | |
| Code Quality Analysis | x | x | x | |
| Build System | x | x | x | x |
| Version Control | x | x | x | x |
| Integration System | x | x | x | |
| Spec System | x | x | | |
| RTL Generation | x | x | | |
| TestBench Generation | x | x | | |
| Synthesis | x | x | | |
| Netlist Quality Analysis | x | x | | |
| Power Analysis and Optimization | x | x | | |
| ECO Flow | x | x | | |
| Issue/Bug Tracking System | x | x | x | x |
| Infrastructure: Linux/Windows machines, LSF | x | x | x | x |
A pointer holds the address of a variable and can be used to perform any operation that could be done directly on the variable, such as accessing and modifying it. Here are a few facts about pointers:
When a pointer is defined, memory is allocated in the size of a pointer.
Pointers are strongly typed, meaning the compiler retains an association between a pointer and the type of value it points to.
Two pointers can be equal to each other; when they point to the same variable, changing the value through one is also visible through the other.
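A minimal sketch:

```cpp
#include <cstdio>

int main() {
    int x = 10;
    int* p1 = &x;
    int* p2 = p1;         // p1 and p2 now point to the same variable
    *p1 = 42;             // a change through p1 ...
    printf("%d\n", *p2);  // ... is visible through p2: prints 42
    return 0;
}
```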
The size of a pointer varies depending on the architecture: 32 bits on a 32-bit machine and 64 bits on a 64-bit machine.
Pointer subtraction is allowed. The result of pointer subtraction is the distance, in elements, between the two pointers.
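A minimal sketch:

```cpp
#include <cstdio>

int main() {
    int arr[5] = {1, 2, 3, 4, 5};
    int* p1 = &arr[1];
    int* p2 = &arr[4];
    printf("%td\n", p2 - p1);  // prints 3: three elements apart
    return 0;
}
```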
A reference is another name for a pre-existing object. It does not have memory of its own; in other words, a reference is only an alias. A few facts about references are:
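A minimal sketch:

```cpp
#include <cstdio>

int main() {
    int x = 5;
    int& ref = x;       // ref is just another name for x
    ref = 7;            // assigns to x through the reference
    printf("%d\n", x);  // prints 7
    return 0;
}
```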
A reference is immutable. You cannot reassign a reference to another piece of memory.
When you use references in function calls and class method calls, you always want to make them const. This helps eliminate the side effects of using references (because using a reference is sometimes not as obvious as using a pointer, and people may not notice that unintended side effects can happen). The following example shows the possible side effects of using references:
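A minimal sketch of the hazard:

```cpp
#include <cstdio>

// nothing at the call site hints that n can be modified
void addTen(int& n) {
    n += 10;
}

int main() {
    int value = 5;
    addTen(value);          // looks like pass-by-value to a casual reader
    printf("%d\n", value);  // prints 15: value was changed as a side effect
    return 0;
}
```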
The good way is to always use const when using references:
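A minimal sketch:

```cpp
#include <cstdio>
#include <string>

// const reference: no copy is made, and the callee cannot modify the argument
void printName(const std::string& name) {
    printf("%s\n", name.c_str());
    // name += "!";  // would not compile: name is const
}

int main() {
    std::string n = "Euccas";
    printName(n);
    return 0;
}
```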
By default, functions in C++ pass variables by value, which means that a copy of the value is made and that copy is used inside the function. This is called pass by value. Passing a reference or a pointer instead does the same job and is faster, because the copy is skipped. Actually, this is why references were added to C++: to allow call by reference, so that you can pass large objects without worrying about stack overflow.
Before references, this could be done with pointers. Passing by pointer does the same thing, but it is a little more complicated than using references.
Example of a “call by reference”:
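A minimal sketch, using the classic swap:

```cpp
#include <cstdio>

// both parameters are references, so the swap affects the caller's variables
void swapInts(int& a, int& b) {
    int tmp = a;
    a = b;
    b = tmp;
}

int main() {
    int x = 1, y = 2;
    swapInts(x, y);
    printf("x=%d y=%d\n", x, y);  // prints x=2 y=1
    return 0;
}
```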
The first topic I’ll write about here is: Qualifiers
C++ uses Qualifiers to adjust qualities of a variable or an object. In C++, there are two types of qualifiers: CV qualifiers and storage qualifiers.
CV qualifiers stands for Const and Volatile Qualifier. There are three types of CV qualifiers:
const marks a variable or function as read-only or immutable. Its value (or the return value of a function) cannot be changed once it has been defined.
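A minimal sketch:

```cpp
#include <cstdio>

int main() {
    const int maxSize = 100;
    // maxSize = 200;  // compile error: assignment of read-only variable
    printf("%d\n", maxSize);
    return 0;
}
```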
volatile marks a variable that may be changed by another process. This is generally used for threaded code, or externally linked code. Often volatile is used to tell the compiler to avoid aggressive optimization involving the qualified object, because the value of the object might be changed by means the compiler is not aware of.
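A minimal sketch, using a flag set from a signal handler (one case where the value changes outside normal program flow):

```cpp
#include <csignal>

// volatile forces the compiler to re-read `done` on every iteration
volatile std::sig_atomic_t done = 0;

void handleSignal(int) {
    done = 1;
}

int main() {
    std::signal(SIGINT, handleSignal);
    while (!done) {
        // spin until Ctrl+C is pressed
    }
    return 0;
}
```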
mutable is used on a data member to make it writable from a const-qualified member function.
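A minimal sketch:

```cpp
#include <cstdio>
#include <string>

class Config {
public:
    std::string name() const {
        ++accessCount_;  // allowed only because accessCount_ is mutable
        return name_;
    }
private:
    std::string name_ = "default";
    mutable int accessCount_ = 0;  // writable even in const member functions
};

int main() {
    Config cfg;
    printf("%s\n", cfg.name().c_str());
    return 0;
}
```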
Storage qualifiers determine the lifetime of the defined variables or functions. By default, a variable defined within a block has automatic lifetime, which is the duration of the block. There are three types of storage qualifiers:
static marks a variable as alive for the duration of the program. Static variables are commonly used for keeping state between calls of a given function or method. Static variables are stored globally, even if they are declared inside a class.
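A minimal sketch:

```cpp
#include <cstdio>

void counter() {
    static int count = 0;  // initialized once; keeps its value across calls
    printf("%d\n", ++count);
}

int main() {
    counter();  // prints 1
    counter();  // prints 2
    return 0;
}
```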
register marks variables as register variables, which are stored in processor registers. Register variables are faster and easier to access and operate on. Note that using register only suggests to the compiler that particular automatic variables should be allocated to CPU registers, if possible; the compiler may or may not actually store the variable in a register. Register variables should only be used if you have detailed knowledge of the architecture and compiler for the computer you are using.
extern declares variables or functions that are defined in a separate translation unit and are linked with the code during the linker step of compilation. In other words, you can define variables or functions in one source file or class, and use them in other source files or classes by declaring them there with the extern qualifier.
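A minimal sketch spanning two files:

```cpp
// globals.cpp
int sharedValue = 42;  // the one real definition

// main.cpp
#include <cstdio>
extern int sharedValue;  // declaration: defined in globals.cpp

int main() {
    printf("%d\n", sharedValue);
    return 0;
}
```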
I just enabled commenting on this blog with Disqus.
Feel free to leave comments on the blog posts you’re interested in. I look forward to having conversations with people who spend some time reading my blog.
Cheers,
Euccas