Chris Woodroofe, Gatwick's chief operating officer, said on Thursday afternoon there had been another drone sighting which meant it was impossible to say when the airport would reopen.
He told BBC News: "There are 110,000 passengers due to fly today, and the vast majority of those will see cancellations and disruption. We have had within the last hour another drone sighting so at this stage we are not open and I cannot tell you what time we will open.
"It was on the airport, seen by the police and corroborated. So having seen that drone that close to the runway it was unsafe to reopen."
Defacement: Sellers armed with the accounts of Amazon distributors (sometimes legitimately, sometimes through the black market) can make all manner of changes to a rival's listings, from changing images to altering text to reclassifying a product into an irrelevant category, like "sex toys."
Phony fires: Sellers will buy their rival's product, light it on fire, and post a picture to the reviews, claiming it exploded. Amazon is quick to suspend sellers for safety claims.
[...]
Over the following days, Harris came to realize that someone had been targeting him for almost a year, preparing an intricate trap. While he had trademarked his watch and registered his brand, Dead End Survival, with Amazon, Harris hadn't trademarked the name of his Amazon seller account, SharpSurvival. So the interloper did just that, submitting to the patent office as evidence that he owned the goods a photo taken from Harris' Amazon listings, including one of Harris' own hands lighting a fire using the clasp of his survival watch. The hijacker then took that trademark to Amazon and registered it, giving him the power to kick Harris off his own listings and commandeer his name.
[...]
There are more subtle methods of sabotage as well. Sellers will sometimes buy Google ads for their competitors for unrelated products -- say, a dog food ad linking to a shampoo listing -- so that Amazon's algorithm sees the rate of clicks converting to sales drop and automatically demotes their product.
class Leaf:
    def __init__(self):
        self.val = 0            # will have a value.

    def value(self):
        return self.val

class Node:
    def __init__(self):
        self.children = []      # will have nodes added to it.

    def value(self):
        return sum(c.value() for c in self.children)
My code made a tree about 600 levels deep, meaning the recursive builder function had used 600 stack frames, and Python had no problem with that. Why would value() then overflow the stack?
The answer is that each call to value() uses two stack frames. The line that calls sum() is using a generator comprehension to iterate over the children. In Python 3, all comprehensions (and in Python 2 all except list comprehensions) are actually compiled as nested functions. Executing the generator comprehension calls that hidden nested function, using up an extra stack frame.
It’s roughly as if the code was like this:

def value(self):
    def _comprehension():
        for c in self.children:
            yield c.value()
    return sum(_comprehension())
Here we can see the two function calls that use the two frames: _comprehension() and then value().
Comprehensions do this so that the variables set in the comprehension don’t leak out into the surrounding code. It works great, but it costs us a stack frame per invocation.
That explains the difference between the builder and the summer: the summer uses two stack frames for each level of the tree. I’m glad I could fix this, but sad that the code is not as nice as using a comprehension:

class Node:
    ...
    def value(self):
        total = 0
        for c in self.children:
            total += c.value()
        return total
Oh well.
simpleservo crate into an API to embed Servo on new platforms without worrying about the details.
Content-Type charset values for documents.

GOODEVENING HBO
FROM CAPTAIN MIDNIGHT
$12.95/MONTH?
NO WAY!
[SHOWTIME/MOVIE CHANNEL BEWARE!]
type Mutex struct {
	key  int32
	sema int32
}

func xadd(val *int32, delta int32) (new int32) {
	for {
		v := *val
		if cas(val, v, v+delta) {
			return v + delta
		}
	}
	panic("unreached")
}

func (m *Mutex) Lock() {
	if xadd(&m.key, 1) == 1 {
		// changed from 0 to 1; we hold lock
		return
	}
	sys.semacquire(&m.sema)
}

func (m *Mutex) Unlock() {
	if xadd(&m.key, -1) == 0 {
		// changed from 1 to 0; no contention
		return
	}
	sys.semrelease(&m.sema)
}
To acquire the lock means the current goroutine takes ownership of it; every other goroutine can only wait.
A mutex has two modes: normal mode and starvation mode.

In normal mode, waiting goroutines queue in FIFO order. A woken waiter does not own the lock outright; it must compete with goroutines that are newly requesting the lock. The newcomers have an advantage: they are already running on a CPU, and there may be several of them, so a freshly woken waiter is quite likely to lose the race. When it loses, it is placed back at the front of the wait queue. If a waiter fails to acquire the lock for more than 1ms, it switches the mutex into starvation mode.

In starvation mode, ownership of the lock is handed directly from the unlocking goroutine to the waiter at the head of the queue. Newly arriving goroutines do not try to acquire the lock, even if it appears unlocked, and do not spin; they simply queue at the tail.

If a waiter acquires the lock and either (1) it was the last goroutine in the queue, or (2) it waited for less than 1ms, it switches the mutex back to normal mode.

Normal mode has much better performance, but starvation mode matters because it prevents pathological tail latency.
type Mutex struct {
	state int32
	sema  uint32
}
func (m *Mutex) Lock() {
	// Fast path: the state shows no lock held, no waiters, no woken
	// goroutine, and normal mode, so grab the lock and return. This is
	// the case the first time a goroutine requests the lock, or whenever
	// the lock is idle.
	if atomic.CompareAndSwapInt32(&m.state, 0, mutexLocked) {
		return
	}
	// When this goroutine started waiting.
	var waitStartTime int64
	// Whether this goroutine is in starvation mode.
	starving := false
	// Whether this goroutine has been woken.
	awoke := false
	// Spin count.
	iter := 0
	// Snapshot of the lock's current state.
	old := m.state
	for {
		// First condition: the state is locked but not starving. If the
		// mutex is starving, spinning is useless, because ownership is
		// handed straight to the first waiter in the queue.
		// Second condition: spinning is still allowed (multicore, low
		// pressure, within the iteration limit; see sync_runtime_canSpin).
		// If both hold, keep spinning until the lock is released, the
		// mutex turns starving, or spinning is no longer allowed.
		if old&(mutexLocked|mutexStarving) == mutexLocked && runtime_canSpin(iter) {
			// While spinning, if the state's woken flag is not yet set and
			// there are waiters, set the flag and mark ourselves as awoke.
			if !awoke && old&mutexWoken == 0 && old>>mutexWaiterShift != 0 &&
				atomic.CompareAndSwapInt32(&m.state, old, old|mutexWoken) {
				awoke = true
			}
			runtime_doSpin()
			iter++
			old = m.state
			continue
		}
		// At this point the state may be any of:
		//   1. still locked, normal mode
		//   2. still locked, starvation mode
		//   3. released, normal mode
		//   4. released, starvation mode
		// and awoke may be true or false (another goroutine may have set
		// the state's woken flag).

		// new starts as a copy of the current state and is used to build
		// the next state; old is the lock's current state.
		new := old
		// If the old state is not starving, set the locked bit in new and
		// try to take the lock via CAS. In starvation mode, do not set the
		// locked bit: the lock is handed directly to the first waiter.
		if old&mutexStarving == 0 {
			new |= mutexLocked
		}
		// If the lock is held or starving, add one to the waiter count.
		if old&(mutexLocked|mutexStarving) != 0 {
			new += 1 << mutexWaiterShift
		}
		// If this goroutine is already starving and the old state is
		// locked, mark the new state as starving, switching the mutex
		// into starvation mode.
		if starving && old&mutexLocked != 0 {
			new |= mutexStarving
		}
		// If this goroutine was marked awoke, clear the woken flag in the
		// new state: this goroutine will either acquire the lock or go
		// back to sleep, so either way the new state is no longer "woken".
		if awoke {
			if new&mutexWoken == 0 {
				throw("sync: inconsistent mutex state")
			}
			new &^= mutexWoken
		}
		// Try to install the new state via CAS. Note the locked bit is not
		// necessarily set; the CAS may only be recording starvation.
		if atomic.CompareAndSwapInt32(&m.state, old, new) {
			// If the old state was neither locked nor starving, this
			// goroutine now owns the lock: return.
			if old&(mutexLocked|mutexStarving) == 0 {
				break
			}
			// Record/compute how long this goroutine has been waiting.
			// A goroutine that was already waiting (queueLifo=true) goes
			// back to the head of the wait queue; a newcomer
			// (queueLifo=false) joins the tail and waits its turn.
			queueLifo := waitStartTime != 0
			if waitStartTime == 0 {
				waitStartTime = runtime_nanotime()
			}
			// We failed to get the lock, so block on the semaphore.
			runtime_SemacquireMutex(&m.sema, queueLifo)
			// Woken up after sleeping: check whether this goroutine has
			// now waited long enough to count as starving.
			starving = starving || runtime_nanotime()-waitStartTime > starvationThresholdNs
			// Reload the current state.
			old = m.state
			// If the state is starving, the lock must be unlocked and must
			// have been handed directly to this goroutine.
			if old&mutexStarving != 0 {
				// If the state shows locked or woken, or the wait queue is
				// empty, the state is inconsistent.
				if old&(mutexLocked|mutexWoken) != 0 || old>>mutexWaiterShift == 0 {
					throw("sync: inconsistent mutex state")
				}
				// Take the lock and decrement the waiter count.
				delta := int32(mutexLocked - 1<<mutexWaiterShift)
				// If this goroutine is not starving, or it was the last
				// waiter, switch the mutex back to normal mode.
				if !starving || old>>mutexWaiterShift == 1 {
					// Exit starvation mode.
					delta -= mutexStarving
				}
				// Install the new state; we hold the lock, so return.
				atomic.AddInt32(&m.state, delta)
				break
			}
			// Normal mode: this goroutine was woken, so reset the spin
			// count and restart from the top of the loop.
			awoke = true
			iter = 0
		} else {
			// CAS failed: reload the state and restart the loop.
			old = m.state
		}
	}
}
func (m *Mutex) Unlock() {
	// Drop the locked bit. If the state was not locked, this is an Unlock
	// of a mutex that was never locked: panic.
	new := atomic.AddInt32(&m.state, -mutexLocked)
	if (new+mutexLocked)&mutexLocked == 0 {
		throw("sync: unlock of unlocked mutex")
	}
	// The lock is released; the waiters still need to be notified.
	// If the mutex is starving, hand it straight to the first waiter in
	// the queue; if it is in normal mode, wake one waiter to compete for it.
	if new&mutexStarving == 0 {
		// Normal mode.
		old := new
		for {
			// If there are no waiters, or the mutex is already locked,
			// woken, or starving, there is nothing to do: return.
			if old>>mutexWaiterShift == 0 || old&(mutexLocked|mutexWoken|mutexStarving) != 0 {
				return
			}
			// Decrement the waiter count and set the woken flag.
			new = (old - 1<<mutexWaiterShift) | mutexWoken
			// Install the new state; the semaphore wakes one blocked
			// goroutine to go try to acquire the lock.
			if atomic.CompareAndSwapInt32(&m.state, old, new) {
				runtime_Semrelease(&m.sema, false)
				return
			}
			old = m.state
		}
	} else {
		// Starvation mode: hand ownership directly to the first waiter in
		// the queue. Note that the state's mutexLocked bit is not set here;
		// the woken goroutine will set it. In the meantime, because the
		// mutex is starving, it is still treated as locked, so a newly
		// arriving goroutine cannot steal it.
		runtime_Semrelease(&m.sema, true)
	}
}
package main

import (
	"sync"
	"time"
)

func main() {
	var mu sync.Mutex
	go func() {
		mu.Lock()
		time.Sleep(10 * time.Second)
		mu.Unlock()
	}()
	time.Sleep(time.Second)
	// Go mutexes are not owner-tracked: this releases the lock held by the
	// goroutine above, whose own Unlock will later panic with
	// "sync: unlock of unlocked mutex".
	mu.Unlock()
	select {}
}
--target=riscv64-linux-musl
is sufficient to complete this step. The other
major piece is the C standard library, or libc. Unlike the C compiler, this step
required some extra effort on my part - the RISC-V port of musl libc, which
Alpine Linux is based on, is a work in progress and has not yet been upstreamed.

apk add \
    -X https://mirror.sr.ht/alpine/main/ \
    --allow-untrusted \
    --arch=riscv64 \
    --root=/mnt \
    alpine-base alpine-sdk vim chrony
Run /bin/busybox --install and apk fix on first boot. This is still a work in progress, so configuring the rest is left as an exercise for the reader until I can clean up the process and make a nice install script. Good luck!
1. Alibaba's counterstrike

Why did a product like DingTalk come out of Alibaba rather than Tencent, the company far stronger in IM?

The story starts with the painful failure of Laiwang, the messaging app Jack Ma pushed hard in 2014.

In hindsight, Laiwang was doomed, but the campaign taught Alibaba two things: it finally saw its own DNA clearly, and the Laiwang push revealed a strong pull over business (B-side) customers. As for WeChat, Alibaba also learned not to fight it head-on, but to look for a soft spot exposed by fragmented communication.

The chance to find that opening was seized by Chen Hang, which meant DingTalk was born of a hunger for survival and for revenge.

Chen Hang was then the head of Laiwang and one of Alibaba's notorious "serial failures": eTao did not succeed; Laiwang did not either. At a company like Alibaba, repeated failure means ever fewer chances.

Watching Laiwang's numbers drop day after day, Chen was under enormous pressure: "People have feelings. A band of brothers has followed you; how would it feel to just disband on the spot? You owe them an answer. So what do you do? If this battlefield doesn't work, go fight on another one!"

"Success is going from failure to failure without losing enthusiasm." Churchill delivered that famous line at the darkest moment of his life, when, facing the fate of 300,000 young soldiers stranded at Dunkirk, he withstood immense pressure and chose to fight on.

Chen Hang also had a choice to make: accept the failure, or start again. For him, DingTalk was his last chance. "If it failed again, the team would be disbanded and the resources reassigned." As Chen himself later said, "There was no grand vision at the time. We just wanted to survive."

Taking his "defeated remnants" from Laiwang, Chen and the team buried themselves in Lakeside Garden (Hupan Garden), the apartment complex where, for Alibaba people, the whole story began.

The advantage of starting up in a residential apartment was that the team could focus on development almost around the clock. Lakeside Garden has a swimming pool, and whenever the work became overwhelming, Chen and the others would dive in. That is how they got through the hardest days.

Chen's insight was in fact sound: before DingTalk, small and medium-sized businesses made do with QQ and WeChat for internal communication, which clearly failed to meet the latent needs of management.

A company's biggest pain point is the cost of turning communication into coordination. In the WeChat era, a work instruction could be ignored or shelved, undermining execution and efficiency and leaving business owners with a severe lack of control.

That gave DingTalk its opening.

In hindsight, DingTalk's decision to build around the needs of the boss from day one was clearly right. The team argued about it, but Chen held that only if a company's boss finds the software useful will all of its employees end up using it.

Guaranteed delivery is DingTalk's core feature, and also the one that draws the most controversy and complaints, because it lets managers apply direct pressure on the people doing the work. DingTalk messages carry read receipts; in group chats you can see not only the read status but exactly who has not read a message, and you can "DING" the unread recipients, pushing the message to their phones as a phone call to make sure it gets through.

From in-app chat, to SMS, to a phone call, DingTalk escalates until the message is delivered.

"Our core technical innovation is the seamless fusion of the internet with the telephone network," Chen said. You can chat on WeChat for a long time without becoming more efficient, because WeChat is merely a chat tool, whereas work is compulsory.

These features did not come from a product manager's fantasies. To pin down the real pain points in business communication, the DingTalk team ran a frantic survey of 1,200 small and medium-sized companies. "Sometimes users themselves don't know what they want; asking gets you nowhere, but needs can be sensed through observation. Our rule was that the team had to sit inside the customer's company and watch everything. Customers are wary, though, so you have to eat with them and drink with them."

Armed with that core insight, they still had to take it to market, and DingTalk's original "three axes" show how down-to-earth its playbook was.

First, seize the business owner's central pain point, while offering a goal both decision-makers and workers can accept: everyone gets more efficient.

Second, thorough onboarding: once a company signs up for DingTalk, it receives full usage training.

Third, make it free. Recall how 360 took the antivirus market while its domestic rivals were grimly insisting on the necessity of charging.

With these three moves, DingTalk's success in the market was predictable. From its official launch in Beijing in January 2015, within three years DingTalk signed up more than 7 million companies and organizations; by the end of 2017 its individual users had passed 100 million.

"At first we just wanted to survive; later we realized we had stepped into the great tailwind of the enterprise-services market," Chen summed up.
2. The penguin's slow turn to B

Just as Chen Hang and his crew were holing up in Lakeside Garden, something big was happening at Tencent.

In April 2014, Tencent and JD.com announced a deal: JD would acquire 100% of Tencent's B2C platform QQ Wanggou and its C2C platform Paipai, together with logistics staff and assets.

The deal was brokered by Zhang Lei of Hillhouse Capital. Tencent insiders reportedly opposed "selling off" e-commerce, but Zhang won Pony Ma over with an argument about inventory: running e-commerce means managing stock, and at 100 billion yuan of sales you might be carrying 20 or 30 billion yuan of inventory that must be checked every day, or it gets stolen, embezzled, and written off. Zhang's closing line to Ma was: "Tencent's biggest problem is not making money; it is cutting out the time and energy it should not be spending." Ma thought it over and agreed.

The story later appeared in the book The JD.com Story (《创京东》). Looking back, the seed of Tencent's to-B anxiety was planted very early.

WeChat once gave all of Tencent enormous traffic and a sense of security, but that security is not permanent. WeChat has never experienced failure, so many of its beliefs have hardened into dogma, and that is dangerous.

Indeed, many products on the market have made the awkward discovery that in an era when WeChat covers nearly every smartphone user, any attempt to acquire users looks like pulling teeth from a tiger's mouth.

The last product Tencent truly feared was Douyin. Before its community had even matured, Douyin became a major threat to the WeChat ecosystem in a very short time, because it cut precisely into users' entertainment scenarios and found a need WeChat had left unmet.

But as scenarios shift and user needs deepen, WeChat will inevitably enter a cycle of slowly leaking traffic, a cycle tied to more diverse scenarios and to the different roles users play when they communicate.

Because workplace communication is compulsory, the most immediate opening for breaking the traffic blockade lies exactly there.

Still, it would not be fair to say Tencent ignored the enterprise IM market.

Tencent RTX was Tencent's earliest enterprise product, an attempt to enter enterprise services through real-time communication; its predecessor was BQQ, the enterprise edition of QQ. In recent years Tencent has also launched TIM, as well as WeChat Work (企业微信), which has only just been promoted to center stage.

The problem is easy to spot: despite running several similar products in an internal horse race, Tencent never produced a killer app able to challenge DingTalk head-on, and WeChat Work only began to get real attention in 2018.

Huang Tieming is general manager of the WeChat Work product department and a member of Zhang Xiaolong's team. After Tencent acquired Foxmail in 2005, Huang followed Zhang into Tencent and helped create key products such as QQ Mail and WeChat; many regard him as Zhang's disciple.

For Zhang Xiaolong, WeChat's success came largely from the extreme simplicity of its features. He has stressed focus to his team more than once: "An app should do one thing. An app that does everything means all-around mediocrity."

To this day that "simple" product philosophy is visible in WeChat Work: beyond core communication, it offers far fewer features than DingTalk.

Open WeChat Work and you find the basic internal-communication module for work, plus simple scheduling, approvals, and cloud file storage. For everything else in a company's workflow, such as sales CRM, administration, or payroll and social insurance, you are pointed to a centralized "third-party apps" entry (similar to a WeChat mini-program portal).

By contrast, "connector" is the word Tencent uses most often, consistent with the "connection" positioning it has promoted since the 3Q war.

Tencent's mobile-office strategy is simple: use the IM strength of its social products, add open interfaces and plug-ins, and ultimately rely on ISVs to supply applications for each niche scenario, hoping a "connector" can unify office software.

On ISVs, DingTalk CEO Chen Hang has his own view: "DingTalk does fusion with ISVs, not connection." Only fusion, he argues, lets both sides build a good product with their investment maximized; connection alone is mere greed for platform traffic.
3. A behemoth, or simplicity?

A veteran of the SaaS field compared the two this way: Tencent is still explaining enterprise work in terms of "communication," which looks shallow today. In more complex management scenarios, companies want a complete set of solutions, not a chat feature.

Next to WeChat Work, DingTalk is far more sprawling.

"It is getting harder and harder to answer what DingTalk is, because the product's boundary keeps expanding to fit ever more complex scenarios." That is Chen's view, the exact opposite of the simplicity Tencent's products preach.

What is the most frightening thing in a race? The opponent ahead of you keeps running faster than you!

Some 1,400 days after its birth, DingTalk has grown from a mobile-office IM tool into an enterprise-services ecosystem covering every kind of work scenario.

On the software side it has stretched from IM and OA into HR SaaS and intelligent customer service, and then stepped outside enterprise software altogether to build hardware; DingTalk may be China's enterprise-services company most obsessed with selling devices.

This year DingTalk released a run of office products: the M2 smart reception desk (an attendance clock), the C1 smart communications center (a router), and the FOCUS smart projector. Combined with Alibaba Cloud and various business-travel products, DingTalk can now cover still more scenarios in running a company: clocking in, meetings, approvals, business trips, expense claims, and so on.

Meanwhile, in verticals such as catering, retail, FMCG, and logistics, DingTalk has teamed up with leading companies in each industry, such as Xibei and RT-Mart, to deliver office-management solutions.

Software, hardware, and solutions are DingTalk's "troika" today.

Chen later explained DingTalk this way: "The tool is only the vehicle. What really makes a difference is the management philosophy embodied in features like daily and weekly reports, read receipts, and DING: respect for working efficiency."

Free remains the trump card. The influx and attention of the giants has not brought the industry a spring. One insider says a great many Chinese SaaS companies now live in a state of not thriving, but not dying either. With free as the market norm, SaaS companies must face the reality that charging is hard. "But we will also actively seek cooperation with the big players," the insider added.

Sheltered by Alibaba and its US$400 billion market value, DingTalk can grow wild at its own pace, without agonizing, as its SaaS peers must, over paid usage and renewal rates.

At its autumn 2018 launch event, Chen presented the results of DingTalk's four years, with the scenarios of enterprise management grouped under four keywords: people, money, things, and affairs.

As usual, Chen closed the event by stressing that DingTalk grew out of Laiwang's failure, a product "born toward death." With a single tool as the entry point, backed by scenario-based services and deployable management solutions, behind DingTalk lies an enterprise-services market worth trillions of yuan.
4. Tencent will not build another DingTalk

For Tencent, beating DingTalk cannot mean building the next DingTalk.

In a prominent spot in WeChat Work sits an "off duty" feature, currently the feature users like most. Clearly WeChat Work wants the product itself to ease the pressure workplace chat puts on employees; in Tencent's words, to "build a purely work-focused communication environment for people willing to work outside working hours."

All this expresses the "kind," "warm" values of Tencent products, yet this feature so popular with employees has been criticized by seasoned industry observers. To them it shows WeChat Work thinking about how to disturb users less, which is still Tencent's to-C product mindset: "Work requires coordination. If you take a break and he takes a break, how does any work get done?"

Chen is dismissive of such features: "At least I don't have to worry about them building another DingTalk. As far as I can tell, they won't do it today, because the human needs we pursue and the human needs WeChat pursues are different. WeChat pursues the receiver's needs; we face the sender's needs, and we use direct means to raise working efficiency."

Around the B side, Alibaba and Tencent have begun fighting at close quarters. This is no longer a "proxy war" waged through investments in smaller giants; this time both have taken the field themselves, bayonets in hand.

Asked whether Tencent's shift of focus to B brings pressure, Chen will occasionally concede in private: "WeChat has 700 million users and can acquire its first batch of seed users quickly, an effortless start. For DingTalk's future, that is a latent pressure."

To some degree, the differences between DingTalk and WeChat Work are the differences in product philosophy between China's two largest internet companies, and behind that, a reflection of starkly different styles of corporate management and values.

This winter, if people were forced to choose one enterprise IM, employees might well prefer WeChat Work, but the boss would certainly choose DingTalk.

The question is: who makes the decision?

This article is an original work by Sheyu (WeChat: shuyang9451). Please respect the copyright and keep this line when reprinting or republishing; discussion and exchange on the to-B field are welcome.

Original article: "DingTalk Stings Tencent: The Silent War over a Trillion-Yuan Enterprise-Services Market"
The go command defaults to module mode when run in directory trees outside GOPATH/src and marked by go.mod files in their roots. This setting can be overridden by setting the transitional environment variable $GO111MODULE to on or off; the default behavior is auto mode.
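As an illustration, a minimal go.mod file marking a module root might look like this (the module path and dependency shown are invented for the example):

```
module example.com/hello

go 1.12

require rsc.io/quote v1.5.2
```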
We’ve already seen significant adoption of modules across the Go community, along with many helpful suggestions and bug reports to help us improve modules. Go 1.12 keeps auto mode as the default.
In addition to many bug fixes and other minor improvements, perhaps the most significant change in Go 1.12 is that commands like go run x.go or go get rsc.io/2fa@v1.1.0 can now operate in GO111MODULE=on mode without an explicit go.mod file.
In Go 1.13, we aim to make module mode the default (changing the setting from auto to on) and deprecate GOPATH mode. In order to do that, we’ve been working on better tooling support along with better support for the open-source module ecosystem.
As part of adapting go vet to support modules, we introduced a generalized framework for incremental analysis of Go programs, in which an analyzer is invoked for one package at a time. In this framework, the analysis of one package can write out facts made available to analyses of other packages that import the first. For example, go vet’s analysis of the log package determines and records the fact that log.Printf is a fmt.Printf wrapper. Then go vet can check printf-style format strings in other packages that call log.Printf. This framework should enable many new, sophisticated program analysis tools to help developers find bugs earlier and understand code better.
A key part of the original design of go get was that it was decentralized: we believed then—and we still believe today—that anyone should be able to publish their code on any server, in contrast to central registries such as Perl’s CPAN, Java’s Maven, or Node’s NPM. Placing domain names at the start of the go get import space reused an existing decentralized system and avoided needing to solve anew the problems of deciding who can use which names. It also allowed companies to import code on private servers alongside code from public servers. It is critical to preserve this decentralization as we shift to Go modules.
For example, we want tools like goimports to be able to add imports for packages that have not yet been downloaded to the local system.
go get relies on connection-level authentication (HTTPS or SSH) to check that it is talking to the right server to download code. There is no additional check of the code itself, leaving open the possibility of man-in-the-middle attacks if the HTTPS or SSH mechanisms are compromised in some way. Decentralization means that the code for a build is fetched from many different servers, which means the build depends on many systems to serve correct code.
The first line of defense is a go.sum file in each module; that file lists the cryptographic hash of the expected file tree for each of the module’s dependencies. When using modules, the go command uses go.sum to verify that dependencies are bit-for-bit identical to the expected versions before using them in a build.
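For illustration, go.sum entries take the following form, with one hash for the module's file tree and one for its go.mod file (the hash values shown here are purely illustrative):

```
rsc.io/quote v1.5.2 h1:w5fcysjrx7yqtD/aO+QwRjYZOKnaM9Uh2b40tElTs3Y=
rsc.io/quote v1.5.2/go.mod h1:LzX7hefJvL54yjefDEDHNONDjII0t9xZLPXsUe+TKr0=
```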
But the go.sum file only lists hashes for the specific dependencies used by that module. If you are adding a new dependency or updating dependencies with go get -u, there is no corresponding entry in go.sum and therefore no direct authentication of the downloaded bits.
The plan is a notary service that publishes a signed, global equivalent of the go.sum file that go get can use to authenticate modules when adding or updating dependencies. We intend to have the go command check notarized hashes for publicly-available modules not already in go.sum starting in Go 1.13.
Because go get fetches code from multiple origin servers, fetching code is only as fast and reliable as the slowest, least reliable server. The only defense available before modules was to vendor dependencies into your own repositories. While vendoring will continue to be supported, we’d prefer a solution that works for all modules—not just the ones you’re already using—and that does not require duplicating a dependency into every repository that uses it.
The module design introduces the idea of a module proxy, a server that the go command asks for modules, instead of the origin servers. One important kind of proxy is a module mirror, which answers requests for modules by fetching them from origin servers and then caching them for use in future requests. A well-run mirror should be fast and reliable even when some origin servers have gone down. We are planning to launch a mirror service for publicly-available modules in 2019. JFrog’s GoCenter and Microsoft’s Athens projects are planning mirror services too. (We anticipate that companies will have multiple options for running their own internal mirrors as well, but this post is focusing on public mirrors.)
We plan to use a mirror by default in the go command starting in Go 1.13. Using an alternate mirror, or no mirror at all, will be trivial to configure.
Previously, the go command—and any sites like godoc.org—fetched code directly from each code host. Now they can fetch cached code from a fast, reliable mirror, while still authenticating that the downloaded bits are correct. And the index service makes it easy for mirrors, godoc.org, and any other similar sites to keep up with all the great new code being added to the Go ecosystem every day.
trait Foo {
    fn foo(&self, i32);
}
The above is legal in Rust 2015, but not in Rust 2018 (method arguments must be made explicit). Rustfix changes the above code to:

trait Foo {
    fn foo(&self, _: i32);
}
For detailed information on how to use Rustfix, see these instructions.
To transition your code from the 2015 to 2018 edition, run cargo fix --edition.

Each lint can be turned off (allow), and configured as either an error (deny) or a warning (warn). The iter_next_loop lint checks that you haven't made an error by iterating on the result of next rather than the object you're calling next on (an easy mistake to make when changing a while let loop to a for loop). For example,

for x in y.next() {
    // ...
}
will give the error:

error: you are iterating over `Iterator::next()` which is an Option; this will compile but is probably not what you want
 --> src/main.rs:4:14
  |
4 |     for x in y.next() {
  |              ^^^^^^^^
  |
  = note: #[deny(clippy::iter_next_loop)] on by default
  = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#iter_next_loop
Clippy works by extending the Rust compiler. The compiler has support for a few built-in lints; Clippy uses the same mechanisms but with lots more lints. That means Clippy's error/warning format should be familiar, you should be able to apply Clippy's suggestions in your IDE (or using Rustfix), and the lints are reliable and accurate. Upgrading Clippy should not cause new errors in your build (deny), but may throw new warnings.

To install Clippy, run rustup component add clippy, then use it with cargo clippy. For more information, including how to run it in your CI, see the repo readme.

If you use Rustfmt and enforce formatting in CI (cargo fmt --check), then you don't need to worry about code style in review. By using a standard style you make your project feel more familiar for new contributors and spare yourself arguments about code style. Rust's standard code style is the Rustfmt default, but if you must, then you can customize Rustfmt extensively.

To install Rustfmt, run rustup component add rustfmt. To format your project, use cargo fmt. You can also format individual files using rustfmt (though note that by default rustfmt will format nested modules). You can also use Rustfmt in your editor or IDE using the RLS (see below; no need to install rustfmt for this, it comes as part of the RLS). We recommend configuring your editor to run rustfmt on save. Not having to think about formatting at all as you type is a pleasant change.